An Intuitively Plausible Understanding of the Mathematical Representations of Quantum Phenomena I

Bhupinder Singh Anand
Aug 16, 2024

--

Distinguishing between algorithmic verifiability and algorithmic computability

I: A Quantum Thesis: Distinguishing between algorithmic verifiability and algorithmic computability

In an earlier Medium essay, we showed how the famous Poincaré-Hilbert debate — on whether or not the Axiom Schema of Finite Induction in Arithmetic (which can today be taken to be the first-order Peano Arithmetic PA) could be treated as ‘constructive’ in some sense — dissolves once we make a distinction (introduced in [An16]) between:

Definition 1. (Algorithmic verifiability) A number-theoretical relation F(x) is algorithmically verifiable if, and only if, for any specifiable natural number n, there is a deterministic algorithm AL(F, n) which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence {F(1), F(2), …, F(n)}.

Definition 2. (Algorithmic computability) A number-theoretical relation F(x) is algorithmically computable if, and only if, there is a deterministic algorithm AL(F) that can provide objective evidence for deciding the truth/falsity of each proposition in the denumerable sequence {F(1), F(2), …}.

We also note further that:

  • Algorithmic computability implies the existence of a single algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions, whose ‘truth’ values are therefore both deterministic and predictable; whereas
  • Algorithmic verifiability does not imply the existence of a single algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions, whose ‘truth’ values are therefore deterministic, but not necessarily predictable.

We note that although every algorithmically computable proposition is algorithmically verifiable, the converse is not true (see [An16], Theorem 1; [An22], §7.G, Theorem 7.2).
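
The distinction is easy to miss in prose, so a minimal Python sketch may help (the toy relation F and all names below are illustrative assumptions, not drawn from [An16]): algorithmic computability supplies one algorithm that settles every instance, whereas algorithmic verifiability only guarantees, for each n, some algorithm that settles the first n instances.

```python
# Minimal, illustrative sketch: F is a toy relation (F(k) iff k is
# even), so everything here is computable; the point is the *shape*
# of the two notions, not this particular F.

def decide_F(k: int) -> bool:
    """Algorithmic computability: one uniform algorithm AL(F) that
    settles F(k) for every k in the denumerable sequence."""
    return k % 2 == 0

def verifier_for(n: int):
    """Algorithmic verifiability: for each n, *some* algorithm
    AL(F, n), here a finite lookup table, decides F(1), ..., F(n).
    For a verifiable-but-uncomputable F such tables still exist for
    every n, but no uniform rule can generate them."""
    table = {k: (k % 2 == 0) for k in range(1, n + 1)}
    return lambda k: table[k]

AL_F_5 = verifier_for(5)
print([AL_F_5(k) for k in range(1, 6)])    # evidence for F(1), ..., F(5)
print([decide_F(k) for k in range(1, 6)])  # same verdicts, one algorithm
```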

From the point of view of a finitary mathematical philosophy (see [An22], §1, Thesis 1) — which is the constraint within which an applied science ought ideally to operate (see also this preprint) — a significant consequence of the difference between algorithmic verifiability and algorithmic computability for the physical sciences is that we can now articulate a plausible:

Quantum Thesis: Quantum Mechanics is simply the evidence-based recognition that:

- any properties of an elementary particle,

- which can be assumed representable as determinate and predictable in a recursively well-defined — hence deterministic — mathematical model M(t) at time t,

- by algorithmically computable functions before an ‘observation’,

- can only be mathematically represented after the ‘observation’,

- in any putative model M(t+),

- which is assumed to be a conservative extension of M(t) before any further ‘observation’,

- by algorithmically verifiable, but not algorithmically computable, functions,

- whose values are determinate, but not predictable;

- since any ‘observation’ introduces an essential discontinuity in the deterministic model M(t) due to a ‘Butterfly Effect’.

Conservative extension. A theory T2 is a conservative extension of a theory T1 if the language of T2 extends the language of T1, every theorem of T1 is a theorem of T2, and any theorem of T2 in the language of T1 is already a theorem of T1.

Butterfly Effect: In chaos theory, the ‘Butterfly Effect’ is the sensitive dependence on initial conditions whereby a small change in one state of a deterministic nonlinear system can result in large differences in a later state.
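
The effect is easily exhibited numerically. A minimal sketch (the logistic map and the parameter choices are my illustration, not part of the Thesis): two trajectories of a fully deterministic nonlinear map, started 10^-10 apart, soon differ by order 1.

```python
# Sensitive dependence on initial conditions: the logistic map
# x -> r*x*(1 - x) in its chaotic regime (r = 4.0).

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)  # perturb the initial state slightly

for t in (0, 20, 40, 60):
    print(f"t={t:2d}  |a - b| = {abs(a[t] - b[t]):.3e}")
# The gap grows from 1e-10 to order 1: each later state is determinate,
# yet not predictable from any finitely precise measurement of x0.
```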

Now, QM recognizes that:

- no physical ‘observation’ can be replicated exactly and,

- on the basis of experimental data,

- it is possible to mathematically quantify the ‘spread’ of a series of controlled replications at the quantum level by means of Schrödinger’s equations.
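
As a toy illustration of such a quantified ‘spread’ (a sketch under simplifying assumptions: a single two-level system with arbitrarily chosen amplitudes, standing in for the outcome probabilities that Schrödinger’s formalism yields via the Born rule), the distribution over controlled replications is deterministic and quantifiable even though no individual outcome is predictable:

```python
import random

# Toy two-level system with illustrative amplitudes a|0> + b|1>;
# the Born rule fixes the outcome probabilities |a|^2 and |b|^2.
a, b = 0.6, 0.8            # chosen so that a**2 + b**2 == 1
p0 = a * a                 # probability of observing outcome 0

trials = 100_000
hits = sum(random.random() < p0 for _ in range(trials))
print(f"predicted spread: P(0) = {p0:.2f}; observed: {hits / trials:.3f}")
# No single outcome is predictable, but the spread over controlled
# replications is deterministic and exactly quantifiable.
```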

The Quantum Thesis now suggests further that:

- accepting Schrödinger’s equations as faithfully representing the deterministic ‘spread’ evidenced in results of controlled experimental observations at the quantum level,

- if attributable to undeniable sensitive dependence on initial conditions,

- entails that the underlying laws of nature are, indeed, deterministic.

Moreover, if such initial conditions are accepted as essentially indeterminate, then — except through an unfalsifiably subjective ‘divine’ revelation — Bell’s Theorem entails that we can never mathematically formulate an algorithmically computable — hence predictable — law that faithfully represents the underlying quantum behaviour as both deterministic and predictable.

Further, since Bell’s Theorem applies only if the properties of an elementary particle are assumed to be mathematically representable as algorithmically computable by the same (hidden variable) function both before and after an observation, we cannot conclude from Bell’s argumentation that our mathematical representations of quantum phenomena, if accepted as essentially algorithmically verifiable but not algorithmically computable, must admit non-locality under the Quantum Thesis.

II: A further significant consequence of the difference between algorithmic verifiability and algorithmic computability for the physical sciences

A further significant consequence of the difference between algorithmic verifiability and algorithmic computability for the physical sciences is that we may now treat the decimal representation of a real number as corresponding to a physically measurable limit (in the sense of a physically ‘completable’ infinite sequence as needed to resolve Zeno’s famous arrow paradox) — and not only to a mathematically definable limit — if, and only if, such representation is definable by an algorithmically computable function (see [An22], §7.I.a).
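
For instance (a sketch; the guard-digit bookkeeping is an implementation detail of mine, and the series for e is standard), a constant such as e is definable by an algorithmically computable function: a single algorithm returns its decimal representation to any requested number of places.

```python
from math import factorial

def digits_of_e(n: int) -> str:
    """One uniform algorithm returning e to n decimal places, using
    the algorithmically computable series e = sum over k of 1/k!."""
    scale = 10 ** (n + 5)                       # 5 guard digits
    total = sum(scale // factorial(k) for k in range(n + 10))
    s = str(total // 10 ** 5)                   # drop the guard digits
    return s[0] + "." + s[1:n + 1]

print(digits_of_e(20))  # 2.71828182845904523536
# Because one algorithm serves every n, the decimal representation can
# be treated as a mathematically 'completable' sequence with a limit.
```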


III: Some well-known functions that are algorithmically verifiable but not algorithmically computable

We further note that:

(i) All the mathematically defined functions known to, and used by, science are algorithmically computable, including those that define transcendental numbers such as π, e, etc. They can be computed algorithmically since each is definable as the limit of a well-defined denumerable sequence of rationals.

(ii) The existence of mathematical constants that are defined by functions which are algorithmically verifiable but not algorithmically computable — suggested most famously by Georg Cantor’s diagonal argument — has been a philosophically debatable deduction.

Such existential deductions have been viewed with both suspicion and scepticism by scientists such as Henri Poincaré, L. E. J. Brouwer, etc., and disputed most vociferously on philosophical grounds by others such as Ludwig Wittgenstein ([Wi78]).

(iii) In his 1931 paper [Go31] on formally undecidable arithmetical propositions, Kurt Gödel gave a constructive definition of an arithmetical Boolean function [(∀x)R(x)] that is true under any well-defined Tarskian interpretation of his Peano Arithmetic P — hence algorithmically verifiable — but not provable in P — hence not algorithmically computable (see [An22], §2.F, Corollary 2.21).

(iv) The definition of a number-theoretic function that is algorithmically verifiable, but not algorithmically computable, was also given by Alan Turing in his 1936 paper [Tu36] on computable numbers; where he defined a halting function, say H(n), that is 0 if, and only if, the Turing machine with code number n halts on input n.

Such a function is mathematically well-defined — hence algorithmically verifiable — but assuming that it defines an algorithmically computable real number leads to a contradiction.

Turing thus established the mathematical existence of algorithmically uncomputable real numbers.
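
A sketch of the two halves of Turing’s observation (Python callables stand in here for coded Turing machines; this is an illustration of the standard diagonal argument, not Turing’s own notation):

```python
# Illustrative sketch: Python callables stand in for coded machines.
# Suppose, for contradiction, that halts_on_self(p) were a single
# uniform algorithm deciding whether program p halts on its own code.

def make_diagonal(halts_on_self):
    def d():
        if halts_on_self(d):   # if the decider says 'd halts on d'...
            while True:        # ...then d loops forever;
                pass
        return None            # ...otherwise d halts: contradiction.
    return d

# Hence no single algorithm computes H(n) for all n. Yet every value
# H(n) is determinate (machine n either halts on n or it does not),
# and each finite segment H(1), ..., H(n) is decided by *some* finite
# lookup table, even though no uniform rule can produce those tables:
# algorithmic verifiability without algorithmic computability.
```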

(v) A definition of a number-theoretic function that is algorithmically verifiable but not algorithmically computable was given by Gregory Chaitin in [Ct82]; where he defined a class of constants — denoted by Ω — which is such that if C(n) is the n’th digit in the decimal expression of an Ω constant, then the function C(x) is algorithmically verifiable but not algorithmically computable.
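
A heavily simplified sketch (the finite ‘universe’ of programs below is a pretend stand-in for a real prefix-free machine, whose halting behaviour is not computable): Ω can be approximated from below by confirming halting programs one by one, so each digit is eventually evidenced, yet no algorithm can certify when a digit has stabilised.

```python
# Pretend universe of (program length, halts?) pairs; in reality the
# second coordinate is precisely what no algorithm can decide.
TOY_PROGRAMS = [(2, True), (3, True), (3, False), (4, True), (5, False)]

def omega_lower_bound(stage: int) -> float:
    """Each dovetailing stage confirms one more halting program,
    yielding an increasing lower bound on Omega = sum of 2**-len(p)
    over halting programs p."""
    confirmed = [l for (l, h) in TOY_PROGRAMS if h][:stage]
    return sum(2.0 ** -l for l in confirmed)

for stage in range(1, 4):
    print(f"after stage {stage}: Omega >= {omega_lower_bound(stage):.4f}")
# The bounds converge to Omega from below, but one can never certify
# that a given digit C(n) has stopped changing: each digit is
# algorithmically verifiable, not algorithmically computable.
```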

IV: Are some physical constants also algorithmically uncomputable?

The question arises: Are some physical constants representable by real numbers which are definable only by algorithmically verifiable but not algorithmically computable functions?

The possibility is suggested by the following perspective on one of the challenging issues in physics, namely that of theoretically determining the magnitudes of some fundamental dimensionless constants:

[quote] … the numerical values of dimensionless physical constants are independent of the units used. These constants cannot be eliminated by any choice of a system of units. Such constants include:

• α, the fine structure constant, the coupling constant for the electromagnetic interaction (≈ 1/137.036). Also the square of the electron charge, expressed in Planck units. This defines the scale of charge of elementary particles with charge.

• μ or β, the proton-to-electron mass ratio, the rest mass of the proton divided by that of the electron (≈ 1836.15). More generally, the rest masses of all elementary particles relative to that of the electron.

• α_s, the coupling constant for the strong force (≈ 1)

• α_G, the gravitational coupling constant, which is the square of the electron mass, expressed in Planck units. This defines the scale of the mass of elementary particles.

At the present time, the values of the dimensionless physical constants cannot be calculated; they are determined only by physical measurement. This is one of the unsolved problems of physics. …

The list of fundamental dimensionless constants decreases when advances in physics show how some previously known constant can be computed in terms of others. A long-sought goal of theoretical physics is to find first principles from which all of the fundamental dimensionless constants can be calculated and compared to the measured values. A successful ‘Theory of Everything’ would allow such a calculation, but so far, this goal has remained elusive. [unquote] … Dimensionless physical constant — Wikipedia

From the perspective of Definitions 1 and 2, we could now suggest that:

Thesis 1. (Dimensionless constants) Some of the dimensionless physical constants are only representable in a mathematical language as ‘unmeasurable’ real numbers that are defined by quantum functions which are algorithmically verifiable, but not algorithmically computable.

In other words, we cannot treat such constants as denoting — even in principle — a measurable limit, as we could a constant that is representable mathematically by a real number that is definable by algorithmically computable functions.

One reason why dimensionless physical constants may not be representable in a mathematical language L as ‘measurable’ real numbers might be that they are determined only at a measurement by the mathematical model M(L) of the physical phenomena sought to be represented by the model before the measurement.

If so, the value of a dimensionless constant in a mathematical model M(L) of physical phenomena before a measurement must differ from the value of the dimensionless constant in a post-facto mathematical model M(L1) of the phenomena which seeks to predict the consequences of the measurement.

In other words, if the values of some dimensionless constants are defined by ‘quantum’ functions which are algorithmically verifiable, but not algorithmically computable, then they are of an evolving nature (compare with Gisin’s perspective in [Gi19] and [Gi20] that such functions are relatively random; see also [An22], §7.G, Definition 27).

Such values ‘unfold’ only as, and when, a measurement on a putative model M(L) — of a mathematical theory L that seeks to faithfully represent observations of some physical phenomena before a measurement — is sought to be integrated into a putative model M(L1) of a conservative extension L1 of the theory L after the measurement (in the sense of [An22], §23.B.a).

V: Classical and Neo-classical laws of nature

From the point of view of mathematical philosophy, this distinction would be expressed by the following theses (their significance is highlighted in [An22], §20.C; see also [An22], §7.I.a, Theorem 7.6):

Thesis 2. (Unmeasurable constants) Whilst a symbol for an ‘unmeasurable’ physical constant may be introduced into a physical theory as a primitive term without inviting inconsistency in the theory, the sequence of digits in the decimal representation of the ‘measure’ of an ‘unmeasurable’ physical constant cannot be treated in the mathematical language of the theory as a ‘completed’ denumerable sequence whose ‘measure’ is the Cauchy limit of the sequence.

Thesis 3. (Measurable constants) The sequence corresponding to the decimal representation of the ‘measure’ of a ‘measurable’ physical constant, when introduced as a primitive term into a physical theory, can be treated as a ‘completed’ denumerable sequence, whose ‘measure’ is the Cauchy limit of the sequence in the mathematical language of the theory, without inviting inconsistency.

We note that Zeno’s paradoxical arguments (see [Rus37], pp.347–353; as qualified, however, by [An22], §20.C.b) highlight the philosophical and theological dichotomy between our essentially ‘continuous’ perception of the physical reality that we seek to capture with our measurements, and the essential ‘discreteness’ of any mathematical language of arithmetic in which we seek to express such measurements categorically.

The evidence-based distinction between algorithmically verifiable and algorithmically computable arithmetical functions could be seen as reflecting the dichotomy mathematically.

The evidence-based distinction between algorithmic verifiability (Definition 1) and algorithmic computability (Definition 2) suggests that classical mechanics could be held as complete with respect to the algorithmically computable representation of our observations of physical phenomena, in the sense that:

Thesis 4. (Classical laws) Classical laws of nature determine the nature and behaviour of all those properties of the physical world which are mathematically describable completely at any moment of time t(n) by algorithmically computable functions from a given initial state at time t(0).

In other words, Classical laws are characterised by the property that if a physical process is representable by a Cauchy sequence, then the limit of the sequence corresponds to a limiting state of the physical process.
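
A concrete illustration (the uniformly moving arrow is my example, chosen to echo Zeno, and is not drawn from [An22]): for a classically lawful process, the Cauchy sequence of Zeno-style partial positions converges to a limit that is itself a state of the process.

```python
# Zeno-style record of an arrow moving at unit speed from 0 to 1:
# observed after covering 1/2, then 3/4, then 7/8, ... of the way.
positions = [1 - 2.0 ** -k for k in range(1, 30)]

print(positions[:4])   # [0.5, 0.75, 0.875, 0.9375]
print(positions[-1])   # ~1.0, the Cauchy limit of the sequence

# Under a classical law the limit 1.0 is itself a state of the
# process (the arrow arrives): the mathematically definable limit
# corresponds to a physically measurable limiting state.
```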

However:

Thesis 5. (Neo-classical laws) Neo-classical laws of nature determine the nature and behaviour of those properties of the physical world which are describable completely at any moment of time t(n) by algorithmically verifiable functions; however such properties are not completely describable by algorithmically computable functions from any given initial state at time t(0).

In other words, Neo-classical laws are characterised by the property that if a physical process is representable by a Cauchy sequence, then the limit of the sequence need not correspond to a limiting state of the physical process; which may require an additional, conceivably probabilistic, law to deterministically govern the permissible states of the physical process at the limit.

Since such behaviour follows fixed laws and is thus determinate (even if not algorithmically predictable by classical laws, since their limiting states are revealed only as probabilities), the hypothetical universe considered in [An22], §20.D.c suggests that Albert Einstein could have been justified in his belief (in the sense of [An22], §13.F), reiterated in 1943 to William Hermanns:

[quote] As I have said so many times, God doesn’t play dice with the world. [unquote] … Hermanns: [Her83], p.58.

‘Justified’ from the evidence-based perspective of this investigation if, by ‘play dice’, Einstein’s belief/remark could be interpreted:

- as intending that the probabilities considered in current quantum-mechanical descriptions of physical processes cannot be taken to be defined over a probability space globally (compare [An22], §22.A, Theorem 22.3);

- but allowing the possibility that such probabilities may be definable in terms of locally definable probability spaces (as argued in the case of defining the probability of an integer n being a prime in [An22], §22.A and §22.A.d).
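
For a sense of what a locally definable probability can look like (a sketch of the familiar prime-density heuristic; the construction in [An22], §22.A differs in its details), the probability that an integer near n is prime is well approximated locally by 1/ln n, even though no global uniform distribution over the integers exists:

```python
from math import log, isqrt

def is_prime(m: int) -> bool:
    if m < 2:
        return False
    return all(m % d for d in range(2, isqrt(m) + 1))

def local_prime_density(n: int, window: int = 5000) -> float:
    """Empirical probability that an integer in [n, n + window) is prime."""
    return sum(is_prime(m) for m in range(n, n + window)) / window

for n in (10**5, 10**6):
    print(f"near {n}: observed {local_prime_density(n):.4f}, "
          f"heuristic 1/ln(n) = {1 / log(n):.4f}")
# There is no global uniform probability space over the integers, yet
# the local probability of primality near n is well defined (~1/ln n).
```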

In Part II of this exposition — intended for a lay audience and extracted essentially from my book [An22], §23 — we shall consider the consequences of the above distinction between algorithmic verifiability and algorithmic computability for an intuitively plausible understanding of the mathematical representations of quantum phenomena; as addressed by the following:

Quantum Query. What would introducing experimental observations — which implicitly subsume ‘free will’ — into a mathematical model entail?

References

[An16] Bhupinder Singh Anand. 2016. The truth assignments that differentiate human reasoning from mechanistic reasoning: The evidence-based argument for Lucas’ Gödelian thesis. In Cognitive Systems Research, Volume 40, December 2016, pp. 35–45.

https://www.dropbox.com/s/lrt2xq0wlzo3wxx/s

[An22] … 2024. The Significance of Evidence-based Reasoning in Mathematics, Mathematics Education, Philosophy, and the Natural Sciences. Second edition (Revision). DBA Publishing, Mumbai, India.

https://www.dropbox.com/s/gd6ffwf9wssak86/

[Ct82] Gregory J. Chaitin. 1982. Gödel’s Theorem and Information. In International Journal of Theoretical Physics (1982) 21, pp. 941–954. Kluwer Academic Publishers-Plenum Publishers.

https://doi.org/10.1007/BF02084159

[Gi19] Nicolas Gisin. 2019. Real numbers are the hidden variables of classical mechanics. In Quantum Studies: Mathematics and Foundations (2019). SpringerLink, Springer Nature Switzerland AG.

https://doi.org/10.1007/s40509-019-00211-8

[Gi20] … 2020. Mathematical languages shape our understanding of time in physics. In Nature Physics, Volume 16, pp.114–116. Springer Nature Limited.

https://doi.org/10.1038/s41567-019-0748-5

[Go31] Kurt Gödel. 1931. On formally undecidable propositions of Principia Mathematica and related systems I. Translated by Elliott Mendelson. In M. Davis (ed.), 1965, The Undecidable. Raven Press, New York.

[Her83] William Hermanns. 1983. Einstein and the Poet: In Search of the Cosmic Man. Branden Press, Brookline Village, Massachusetts.

https://en.wikiquote.org/wiki/Albert_Einstein

[Rus37] Bertrand Russell. 1937. The Principles of Mathematics. Second Edition, 1937. George Allen & Unwin Ltd., London.

[Tu36] Alan Turing. 1936. On computable numbers, with an application to the Entscheidungsproblem. In M. Davis (ed.), 1965, The Undecidable. Raven Press, New York. Reprinted from the Proceedings of the London Mathematical Society, ser. 2. vol. 42 (1936–7), pp.230–265; corrections, Ibid, vol 43 (1937) pp.544–546.

[Wi78] Ludwig Wittgenstein. 1978. Remarks on the Foundations of Mathematics. Revised edition, MIT Press, Cambridge, Massachusetts.
