The Beginning of Infinite Regress

Adam Tomas
Conjecture Magazine
31 min read · Dec 10, 2020

The problem with Wittgenstein is better classified as a paradox. The man was and remains a polarising figure. Intense, ascetic, and commanding, Wittgenstein made any encounter with him uniquely memorable. Our contemporary culture embraces this image, depicting him as the profound yet tortured genius whom we could not possibly understand. We tend to celebrate those philosophers who lead the most romantic lives, and Wittgenstein satisfies this desire to a tee. In the only book he ever published, Tractatus Logico-Philosophicus, Wittgenstein lays out a numbered set of declarations with little discussion and leaps of logic that demand interpretation. Wittgenstein’s prose commands authority, for each declaration is an invitation to the reader to ask, “what could you mean, Herr Wittgenstein?” Whether intended or not, this type of mystique is something the human mind relishes. The comforting simplicity of deferring to an authority answers all questions — “Why is it so? Because the King says it is.” But authoritarianism is paradoxical. The King was never subjected to questioning. If he were, then we would soon realise that there need not be any logic to his various declarations. We would have to make peace with any and all contradictions and accept that the world is as he decrees. And in the end: everyone in the kingdom lies to each other and to themselves.

Given our propensity to seek comfort in authority, is it any surprise that we romanticise the very concept of paradox? Is it the familiarity that brings us comfort, or is it something else? Is it the apparent complexity that makes us feel intelligent? In any case, paradoxes are nothing more than boundaries. They are an indication that we must change our dimension. The Tractatus did reveal something. It revealed a paradox at the foundations of language (and logic) — language cannot be turned back upon itself to say anything of meaning. This was an important finding. Published in 1921, the Tractatus is an early proof that foundationalism and authoritarianism are one and the same. But Wittgenstein failed to see this. Embracing paradox and the misconception that all philosophy reduces to a critique of language, he declared an end to philosophy with a final proclamation: “whereof one cannot speak, thereof one must be silent.” Wittgenstein then left the academy for several years, having stifled his creativity with his own authority.

In the intervening years, the Vienna Circle drew inspiration from the Tractatus. Building on the Empiricism of the Enlightenment, the group sought to unite philosophy with science and demarcate between meaningful and meaningless knowledge. By 1929, the group promoted a manifesto with verification as its criterion of demarcation. According to the Circle, any statement that could not be brought back to an empirical observation would be both unscientific and meaningless. The epistemology, or theory of knowledge, that they developed is known as Logical Empiricism or Logical Positivism. But, with empirical observation as a bedrock, they too were enamoured with the same foundationalism as Wittgenstein and the classical Empiricists.

Karl Popper, a close associate of the Circle, challenged the verification criterion and, in 1934, with the publication of The Logic of Scientific Discovery, supplanted it with falsification. He announced that he had “killed positivism,” but he did more than this: Popper also killed inductive inference and foundationalism. Popper’s epistemology, known as Critical Rationalism, remains our best theory for how knowledge grows, what is scientific, and how we must deal with the misconception of foundations. But it does not enjoy widespread understanding, allowing Empiricism to fill the void and continue as the prevailing conception. Why this is the case is not clear. Popper was the happy holder of many commendations, honorary doctorates, and even a knighthood. In contrast with Wittgenstein, his writings are clear, thorough, and try not to invite interpretation. However, few modern-day scientists appear to have taken the time to read Popper, and the academic establishment would rather revel in the romance of a tortured genius. To be clear, scientists do indeed champion Popper’s concept of falsification, but many still embrace the misconception that Empiricism holds some weight or that inductive inference contains knowledge. Like Wittgenstein, Popper was known to be fiercely confrontational, and his style of argument had the same thoroughness as his writing: Popper would do all that he could to steelman his opponent’s argument before proceeding to demolish it. This is admirable. But it lacks the persuasive authority of aloofness and proclamation that characterised dealings with Wittgenstein. Perhaps Popper would be taken more seriously had he given less, had he written and decreed and acted as the Philosopher King.

Within this essay I will drive another nail into the coffin of inductive inference. I will do this with the aid of a simple thought experiment and the introduction of a simple abstraction. In so doing, I will show that other thinkers have driven their own nails into the very same coffin, but that we failed to notice, entranced as we were by the shiny lights of paradox. Our starting point will be Critical Rationalism as advanced by David Deutsch in his two landmark books, The Fabric of Reality and The Beginning of Infinity.

The Empiricist Schema

Calling oneself an empiricist or a Bayesian remains a badge of honour, and saying things like “evidence-based” can be a sure-fire way to signal to the world that you are a rational thinker. But these are not examples of thinking, for they presuppose that knowledge can grow out of inductive inferences. An Empiricist Schema of knowledge growth would proceed as follows:

Observations are made -> A theory is inferred from the observations -> More observations are made to justify the theory

On the surface, the schema seems reasonable. However, there are several things wrong with it. Let’s work backward from justification. The concept of justification is itself a misconception. Should I arrive at a justification for my theory, I will then be required to justify my justification. Without some type of bedrock justification, I am destined for infinite regress. Justificationism bleeds into foundationalism. This is precisely why the search for foundational bedrock has preoccupied so many. That there can be an observational bedrock is illusory, for the act of observation requires an observer to bring a theory to the table. Those with young children (themselves armed with theory) can see the process in action; theory precedes observation, or, as Popper would declare: all observation is theory-laden. There can be no such thing as pure observation, which also rules out the first step in the schema. With no bedrock, the only alternative is fallibilism. Our knowledge is always a best attempt, and this is precisely what opens the door to creativity and progress. Our mechanisms for error-correction are the only ‘foundation’ that makes sense. The naive falsificationist might look to replace the third step in the schema and pronounce it fixed, but that’s not going to make sense when the starting point is so wrong.

A theory is inferred? Making generalisations from repeated observation is dubbed induction, or inductive inference or reasoning. But, as we shall see, there is no reasoning here. Nonetheless, the myth that an inductive inference contains information remains pervasive and often hides in plain sight. Many arrive at the so-called ‘problem of induction’ via Hume, Russell and, more recently, Taleb, but humans have been concerned about the problem as far back as Sextus Empiricus. Hume ponders swans, Russell contemplates chickens, and Taleb rephrases using turkeys: imagine a turkey who is fed by the butcher every day at 5pm. Over time and with each confirming observation, the turkey comes to believe that the butcher loves him. One fateful Thanksgiving the turkey receives the surprise of his life — he learns of his true predicament. All inductivists are at risk of being turkeys. The so-called problem of induction acknowledges that no amount of confirming observations can give the turkey confidence in his knowledge. But the so-called problem is not a problem at all, as the turkey has not inferred anything; the turkey has a theory about his predicament — let’s not forget that his observations are theory-laden. Even if we clean up the turkey’s inference and reframe it as a specific generalisation — food comes every day at 5pm — it remains theory-laden. In any case, an inductive inference is nothing more than an infinite regress in reverse. Any attempt to work backwards and justify a generalisation falls into infinite regress and, as Popper rightfully declares, there is therefore no problem with inductive inference; it simply is not possible, making the prevailing conception a delusion.
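The turkey's predicament can be rendered as a toy simulation (a sketch of my own; the event names and numbers are illustrative, not Taleb's): the confirming observations pile up, yet a single counter-instance is enough to refute the generalisation.

```python
# Toy rendering of the turkey: 999 days of confirming observations,
# then the refutation on day 1000. Event names are illustrative only.
observations = [("fed", day) for day in range(1, 1000)] + [("butchered", 1000)]

confirmations = sum(1 for event, _ in observations if event == "fed")
generalisation = all(event == "fed" for event, _ in observations)

print(confirmations)   # 999 confirming instances...
print(generalisation)  # ...and the generalisation is still false
```

No count of confirmations entails the generalisation; one observation destroys it.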

Critical Rationalism: Deutschian Schema

Notice the asymmetry with induction, and all other forms of justification. Hume put it best when he first formulated the problem of induction — no amount of white swan watching could increase his confidence in the supposition that all swans are white, but the observation of just one black swan could give him 100% confidence in the supposition’s falsity. Critical Rationalism acknowledges this asymmetry as a feature of reality. Another way to consider the asymmetry is that we can never be sure that we are right about anything, but that it is possible to know when we are wrong. Knowledge necessarily grows by criticism, via a process of conjectures and refutations. The Deutschian Schema proceeds as follows:

A problem is identified -> potential explanations are conjectured -> explanations are criticised -> bad theories are replaced -> new problems emerge

With no such thing as pure observation, Critical Rationalism is problem-driven — it is about solving problems. Within the schema, problems emerge from existing theory or from conflict between opposing theories, and solutions are proposed and criticised through creative reasoning and discussion before being subjected to empirical criticism. Empirical criticism takes the form of falsification: the consummate scientist will seek empirical tests designed to destroy their beloved creation. This is known as Popper’s Criterion of Demarcation: that which is falsifiable is scientific knowledge. An explanation that is conceivably falsifiable is better than one which is not. However, because observation is theory-laden, so is falsifiability: although it may demarcate between scientific and unscientific, it is not a criterion for demarcating between good and bad explanations. Deutsch argues that good explanations are ‘hard to vary’: the harder to vary, the better the explanation. Better still, an explanation may display ‘reach’ to new and interesting problems while also solving existing ones. A good explanation that is not falsifiable may one day become so, when we better understand that explanation. Notice also that the schema does not exclude knowledge that is unscientific — that cannot be criticised empirically. Such knowledge is not meaningless; it simply requires that falsification, difficulty to vary, and reach be sought through non-empirical criticism alone. The study of epistemology itself may be unscientific, but this says nothing about its importance. Contrary to Critical Rationalism, the prevailing conception implies meaninglessness for knowledge that cannot be empirically criticised.

Superior though it is, there is an element missing from the Deutschian Schema. Often the addition of an element to a theory is superfluous, and as more elements are added, a theory’s eventual demise becomes more likely. But in this case, I daresay that I have made a discovery of an element that has been lurking in the background, hiding in its own simplicity.

The thought experiment

“A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: ‘What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.’ The scientist gave a superior smile before replying, ‘What is the tortoise standing on?’ ‘You’re very clever, young man, very clever,’ said the old lady. ‘But it’s turtles all the way down!’”

- Stephen Hawking, A Brief History of Time (1988)

Imagine that you are a pirate. That’s right, you’re a pirate — who doesn’t want to be a pirate? With a preternatural aptitude for the dark arts of schooner-jacking and booty-raiding, you’ve amassed the resources to cruise in a ship that goes the distance when it comes to international naval travel. The fear from other sea-dwellers is palpable when the pirate flag of your very conspicuous vessel peeks up and over the horizon. Given your dominance, you’re liable to sail your vessel for long periods of time, and you don’t change direction for anyone. Recently, you’ve started to notice an interesting pattern. Each time you set out on one of those voyages where you don’t stop for anyone, you eventually find yourself back where you have been before — the place from which you first set sail.

What are you to make of this observation? Let’s say that you decide to put the excitement of your domineering lifestyle on hold in order to pursue more contemplative endeavours. You decide to systematically repeat your observation. You’ll use the same vessel and wave your pirate flag at full mast, so you won’t have to stop for anyone. They’ll scurry away at the mere sight of you. You set up base at your exclusive island home and, drawing from what you learned in pirate school, you make sure to record your base with an ‘X marking the spot.’ You set sail north, knowing that whenever you circumnavigate an offending landmass, you will once again regain course using your trusty sextant (it was a gift from your mother). Sure enough, after a long voyage, and a few looting layovers, you catch sight of your ‘X marking the spot’ through your spyglass. What have you learned about your predicament? You decide that maybe this is a feature of your predicament; and you devise the following statement: Travelling in one direction always leads to the place of origin. After a little consideration and a few pirate ales, you decide that you must set sail once again.

You set sail once again, only this time you travel east. And after another long voyage you again catch sight of your ‘X marking the spot’. More consideration, more pirate ales; you decide to go again. And again the same result. You are now dealing head-first with the so-called problem of induction. This is going to require barrels of pirate ale. You’ve traversed the seas, you’ve been to distant lands and you’ve heard all the stories — the Earth is a flat disc supported by a flying tortoise; no it’s a turtle; an elephant; it’s floating on nothing, but there is a waterfall at the end — “the end of what?” “The end of the Earth, stupid!” Several barrels later, you reach an impasse: No matter how many times you set sail, you can never be confident that you won’t one day meet your end with a final descent over the edge of the Earth. Will you meet the turtles all the way down?

By now you may be asking yourself why I have gone to the trouble of devising a new thought experiment to illustrate the failure of inductive inference. But I am not just using turtles in place of swans, chickens and turkeys. There is a distinct difference between my thought experiment and those offered by Hume, Russell and Taleb. In my thought experiment, the resulting generalisation happens to be true, but you can’t have confidence without an explanation. It is unmistakable that although true, the generalisation does not lead to an explanation for your predicament. This is not surprising as generalisations are not explanations. Moreover, the generalisation is a literal dead-end: it does not lead to new problems or questions. All it does is leave unanswered the same question — why does this particular generalisation appear to hold? “Why is it that I always come back to my place of origin when I set sail in a single direction?” The generalisation is your problem. You keep experiencing the problem with each voyage as it conflicts with your existing knowledge; your received wisdom that there is an edge to the Earth. What’s a pirate to do?

Before you do anything, we can visualise your predicament as I have in Figure 1. Your ‘X marking the spot’ resides in the centre of an abstraction that I call the problem-space. The problem-space is a depiction of a phenomenon that requires an explanation. It is the place at which theory and reality meet each other. A problem-space has the curious feature that it is not completely discernible until an explanation is discovered — a problem-space and its corresponding explanation are discovered together. Every theory therefore consists of both a problem-space and a corresponding explanation. A problem-space is partially discernible via problems (when two theories are in conflict). Within the thought experiment, you do not yet know what the problem-space is, but we all know that in this case it is the surface of the Earth.

Students of Popper should not mistake the problem-space for a Popperian problem-situation. Popper acknowledges that problems emerge from situations; and situations are contextual. An avid problem-solver’s situation includes, among other contextual variables, the problem-solver’s received wisdom. Your problem-situation includes your received wisdom that the Earth rests upon an infinite regress of turtles, along with all and any other pirate folklore you bring to your problem-solving endeavours. In contrast, the problem-space is invariant; it will always be the surface of the Earth, no matter your starting situation.

One day you find yourself drowning your epistemological sorrows with more pirate ales at the Rusty Hook. A cheerful man walks in; let’s call him Poppeye. He sits down next to you and orders a whiskey. You’ve been ale-ing on all day and there’s little you can do to stop yourself from yabbering away at him about your problem. Eventually Poppeye is struck by insight: “Maybe you don’t have a problem. Maybe the problem has you.” In your inebriated state, and agitated by Poppeye’s somewhat cryptic advice, you thank him dearly, settle the tab and stumble home to bed. You sleep on your vessel, as the years at sea have created the perverse effect that you get seasick when sleeping on land. You descend into a deep sleep and Poppeye’s words reverberate through the gaps of your mind as you go down. Passing through consciousness and time, you re-emerge to find yourself lucid dreaming; you are at the wheel of your vessel, far out at sea — surrounded by the ocean. You have the growing sense that the confines of your body are illusory; you start to float and you are seeing yourself from above, at the wheel of your vessel, surrounded by the ocean. You keep floating higher; you and your vessel are shrinking and the expanse of the ocean envelops everything. Over in the distance you see your ‘X marking the spot’ appear on the horizon, and your vessel is travelling toward it. But wait, what do you see? You see it, and after a momentary pause it hits you in a flash — yes, it’s curvature. Shaken from your dream, you jump out of bed yelling, “the Earth is a ball!”

Congratulations. You are now the proud owner of your very own creative insight, which is depicted as an explanation within Figure 2. Let’s circle back and spend some time discussing Figure 1. Recall that an inductive inference is classically defined as generalising from particular observations. The arrows on Figure 1 illustrate each of your one-way journeys: when you come to the edge of the problem-space, you appear at the opposing edge and remain on a path to dock at your ‘X marking the spot’. For each particular journey, upon successfully returning to your ‘X marking the spot’, you can state the following — this time, travelling in one direction leads to the place of origin. You wish to make the following generalisation: travelling in one direction always leads to the place of origin. Notice that each of your observations has you traversing the problem-space. Also, notice that your generalisation is simply a restatement of observations on the problem-space and of your initial problem — you’ve not said anything novel about your predicament. You are confined to the problem-space, but it remains indiscernible to you. In contrast, your creative insight was quite literally a leap into the void to conjecture an explanation for your predicament on the problem-space. Notice that the explanation for your predicament exists in another dimension. The surface of the earth is (more or less) 2-dimensional, but that surface is wrapped around a 3-dimensional sphere — you will never meet the turtles. Also, notice that the problem-space itself becomes discernible to you when you have the explanation; they come as a pair. Lastly, notice that you never actually solve your problem; it dissolves to become a feature of the problem-space; a constant; a generalisation that is true but unprovable within the confines of the problem-space.

The Principle of Dimensionality

I conjecture the existence of an epistemological principle that I will call The Principle of Dimensionality: All explanations necessarily reside in a separate dimension to the problem-space that they purport to explain. I realise that it may seem as though I have committed an intellectual sleight of hand, but I will show that the principle is already known and has simply been overshadowed by our propensity to romanticise paradox. A corollary of the Principle of Dimensionality is that we can recast the definition of inductive inference beyond the classic definition of generalising from particulars: inductive inference is any attempt to use a problem-space to explain itself. Inductive inference is also a self-reference paradox.

The term ‘paradox’ has been hijacked as a descriptor for several different situations, but I use it to refer only to a self-reference paradox. Some so-called paradoxes are merely misconceptions of existing theory or romanticised language for knowledge that seems counter-intuitive. The Twin Paradox from Einstein’s Special Relativity is an example. Other so-called paradoxes are actual problems (that also may be misconceptions), such as the Fermi Paradox. Self-reference paradoxes lead to logical contradiction, but each one also hides an infinite regress: all self-reference leads to infinite regress on the one hand and paradox on the other. A simple example of a self-reference paradox is known as ‘The Liar’s Paradox’ and it is specified as follows:

This statement is false.

If the statement is true, then it evaluates as false; and if the statement is false, then it evaluates as true. This is a contradiction. Another way to interpret it reveals the infinite regress — If the statement is indeed false then it must be true, but if it is true then it must be false, which means that it is true, and then false, then true…ad infinitum.
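The regress can be made literal in code (a sketch of mine, not part of the original argument): a naive evaluator that assigns the sentence the negation of its own value never terminates.

```python
import sys

def liar():
    # "This statement is false": its value is the negation of its own value,
    # so evaluating it means evaluating it again, ad infinitum.
    return not liar()

sys.setrecursionlimit(200)  # keep the inevitable regress short
try:
    liar()
    outcome = "evaluated"          # never reached
except RecursionError:
    outcome = "infinite regress"   # the evaluator falls down the regress
```

The contradiction and the regress are two faces of the same self-reference: a consistent evaluator cannot halt, and a halting evaluator cannot be consistent.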

In 1931, the logician Kurt Gödel, an associate of the Vienna Circle, modified The Liar’s Paradox to make a mathematical statement about mathematics itself. He illustrated that there could be no axiomatic bedrock for mathematics. The perceived need for a foundation in mathematics was formally laid out by David Hilbert in his famous Program. Along with the ability to work backwards from any theorem to the bedrock axioms, Hilbert set out four requirements: completeness, consistency, conservation, and decidability. Gödel demonstrated that completeness and consistency could not be established from within a mathematical system.

Notice something important — we can modify The Liar’s Paradox into a statement about provability: this statement is not provable. The slight modification means that the statement may evaluate as false or as ‘true but unprovable.’ With his first Incompleteness Theorem, Gödel constructed a self-referential mathematical statement — this mathematical statement is not provable. He then illustrated that the statement was true but unprovable from within the system; the system was incomplete. One could get around this by going outside of the system to formalise the statement as an axiom, but then there would be another self-referential statement about the new system that would be true but unprovable, and so on, ad infinitum. In a second theorem, Gödel demonstrated that a mathematical system cannot prove its own consistency. Assuming a system that proved its own consistency, Gödel showed, as per his first theorem, that the system would still contain a self-referential statement that is true but unprovable. And since the statement is true, such a system would render it provable from within. A system that proved its own consistency would therefore contain a true statement that is both provable and unprovable, which is a contradiction — a paradox.
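In modern notation (my rendering, not the essay's), the self-referential sentence and the second theorem read as follows, where $\mathrm{Prov}_F$ is the provability predicate of a formal system $F$ and $\ulcorner \cdot \urcorner$ denotes Gödel numbering:

```latex
% The Goedel sentence G asserts its own unprovability within the system F:
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)

% Second theorem: if F is consistent, F cannot prove its own consistency:
\mathrm{Con}(F) \;\rightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner \mathrm{Con}(F) \urcorner)
```

If $F$ is consistent, $G$ is true but unprovable in $F$; adopting $G$ as a new axiom produces a new system with its own Gödel sentence, and so on, ad infinitum.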

Recall the difference between my thought experiment and the classic thought experiments demonstrating the fragility of inductive inference. The classic thought experiments are about generalisations that happen to be false. They are false because they are generalisations about problem-spaces that do not exist. The turkey will not be fed every day, as his problem-space involves a recurring celebration called Thanksgiving. My thought experiment differs from the classics: it is about a generalisation that happens to be true. It is a true generalisation about a problem-space that cannot be proved using the problem-space; it is true but unprovable. Gödel’s Incompleteness Theorems are about epistemology. They are about epistemology because mathematics is about knowledge: mathematicians study the knowledge of mathematical abstraction.

A set of axioms is a set of generalisations about a portion of the abstract problem-space. Mathematicians regard the axioms to be self-evidently true, but they are not. They are the result of conjectures. Each axiom, being a generalisation, is either false or true but unprovable on the abstract problem-space. Gödel’s second theorem is a proof that foundationalism applied to mathematics is paradoxical. Mathematical knowledge is fallible. Gödel’s first theorem reveals that mathematical knowledge is always incomplete. Progress in mathematics proceeds as with all knowledge, through a process of conjectures and refutations. Some theorems are true but unprovable because they reveal a new dimension to the abstract problem-space. Any theorem that appears to defy proof may, with reasoned explanation, be conjectured as a candidate axiom. Consistent with the Principle of Dimensionality, all explanations of the abstract problem-space must be extra-dimensional, and any attempt to prove an axiom using the abstract problem-space is an attempt at inductive inference that is doomed to fail. All axioms that appear to be true are therefore unprovable, and fallible.

In 1936, Emil Post, Alonzo Church, and Alan Turing, working separately and using different techniques, each demonstrated that Hilbert’s decidability requirement, or decision problem, was also untenable. The decision problem is the requirement that there be an algorithm for determining beforehand whether a mathematical theorem has a solution. Turing’s approach is interesting as it rightly depicts algorithms as physical processes, which we can also use to examine my thought experiment. In a thought experiment of his own, Turing devised a simple computing machine that is now known as a Turing machine. The machine is exceedingly simple, but it is powerful; it is universal. Consisting of a tape that can be of infinite length and a reader, the machine reads cells on the tape that either contain a symbol or are blank. The reader has a memory state and a table of instructions. The operator sets the initial memory state for the reader, which then processes the input on the tape by moving from one cell to the next, back and forth, reading, writing, and blanking cells as the table of instructions specifies. When the reader halts, the resulting tape has completed its transformation from input to output. The symbols may be thought of as ones and the blank cells as zeroes — the machine is digital. Turing proved the machine’s universality by showing that each Turing machine could simulate another Turing machine and the problem it had been tasked with. Since all classical computers operate the same as Turing machines, differing only in speed and size of memory state, computation itself is universal. But what functions are computable? The answer to this question addresses the decision problem.
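A minimal simulator makes the description concrete (a sketch under conventions of my own choosing: symbols are 1, blanks read as 0, and the instruction table maps a (state, symbol) pair to a (write, move, next state) triple):

```python
def run(tape, state, rules, halt_state="HALT", max_steps=10_000):
    """Run a Turing machine. `tape` maps cell index -> symbol; blanks read as 0."""
    tape, head = dict(tape), 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
    return tape, state

# Example: unary successor. Scan right past the 1s; write one more 1; halt.
rules = {
    ("scan", 1): (1, +1, "scan"),   # keep moving right over the 1s
    ("scan", 0): (1, 0, "HALT"),    # first blank: write a 1 and stop
}
tape, state = run({0: 1, 1: 1, 2: 1}, "scan", rules)
# tape now holds four 1s: the unary number 3 has become 4
```

The `max_steps` cap is there only so the sketch always terminates; a genuine Turing machine has no such bound, which is exactly what the decision problem is about.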

A computable function is one in which a given input sees the Turing machine eventually halt on an output, so we may recast the decision problem as The Halting Problem to discuss Turing’s paper. A special purpose Turing machine could in theory be built that simulates other Turing machines and returns a one if the simulation halts and a zero if it does not halt. Assuming a machine like this exists, Turing reasoned that if you could construct a situation in which the machine cannot determine halting, then the decision problem would be untenable. Invoking the Liar’s Paradox, Turing wrapped the machine in another Turing machine that lied — it would halt if the special purpose machine returned a zero and run forever if it returned a one. Turing then fed the lying machine a description of itself: If the machine halts then it runs forever, and if it runs forever then it halts. The existence of such a machine is paradoxical, so there cannot be a method for determining halting beforehand, making the decision problem untenable.
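Turing's contradiction can be sketched schematically (these are not real Turing machines, and this is not Turing's own formalism): whatever verdict the supposed decider returns about the lying machine run on itself, the machine does the opposite.

```python
def contrary_behaviour(decider_verdict):
    # Turing's wrapper: if the decider says "halts" (True), run forever (False);
    # if it says "runs forever" (False), halt (True).
    return not decider_verdict

# Feed the lying machine a description of itself: either possible verdict
# about contrary(contrary) is refuted by the machine's actual behaviour.
for verdict in (True, False):
    actual = contrary_behaviour(verdict)
    assert actual != verdict  # the decider is wrong in both cases
```

Since no verdict can be correct, the assumed halting decider cannot exist.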

We can take my thought experiment of you, the pirate, engaging in inductive inference and recast it using Turing machines. Let’s say that there is a Turing machine that records all of your voyages on a tape. The machine marks a 1 on the initial cell to document your place of origin and then moves right to the next cell, representing a fixed sailing distance. If you are back at the origin the machine marks a 1, otherwise a 0, and then continues to the next cell to the right. The machine continues to the right marking 0’s and 1’s until you find an edge to the Earth. Then the machine marks a 1, moves left one cell to mark another 1 and halts. Let’s now say there is a decision machine that takes your voyaging machine as its input and runs a program that looks for cells that read 1 in which the preceding cell also reads 1. Such a combination of machines cannot exist, as the voyaging machine’s program needs to be unobservable to replicate the situation of you, the pirate, on the open seas, unbeknownst to the true nature of your predicament. The voyaging machine, given a blank tape (consisting of zeroes), will be provided with a table of instructions that marks 1’s to the tape at equidistant intervals. Fed a description of the voyaging machine, the decision machine will be given the information beforehand that the voyaging machine will never halt. But to program the voyaging machine, we must necessarily know that it will not halt: this is a paradox. There are two ways to avoid the paradox, but each reveals the infinite regress. In the first situation, there is no voyaging machine, only a tape of infinite length that contains the input of zeroes and ones — a predefined infinite regress. In the second situation, there is a voyaging machine, only its program is specified by another voyaging machine, and its voyaging machine’s program is specified by still another voyaging machine, ad infinitum. Your inductive inference remains true but unprovable using Turing machines. 
Contrast this with your explanation that the surface of the Earth is wrapped around a sphere: you now have a number of computable functions about spheres that will produce outputs about your predicament.
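The pirate’s predicament — that no finite stretch of tape can prove the edge will never come — can be sketched as follows. The function names `voyaging_tape` and `decision_machine` are hypothetical, chosen to mirror the thought experiment, and the sketch simulates only finitely many cells, which is exactly its limitation:

```python
def voyaging_tape(period, cells):
    """Simulate a finite prefix of the voyaging machine's tape:
    a 1 marks the origin and every return to it, 0s in between."""
    return [1 if cell % period == 0 else 0 for cell in range(cells)]

def decision_machine(tape):
    """Scan a finite tape for two adjacent 1s -- the 'edge of the
    Earth'. A finite scan can only ever answer 'not found yet'."""
    for i in range(1, len(tape)):
        if tape[i] == 1 and tape[i - 1] == 1:
            return True
    return False  # no edge so far -- not a proof that none exists

# However many cells we inspect, the answer never changes:
print(decision_machine(voyaging_tape(3, 100)))     # False
print(decision_machine(voyaging_tape(3, 10_000)))  # False
```

No finite simulation distinguishes a tape that never shows an edge from one whose edge simply lies beyond the cells inspected — the decision machine would need the voyaging machine’s program, which is precisely what the thought experiment withholds.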

Recall the universality of Turing machines. Turing conjectured that a Turing machine can compute all numbers that would be regarded as ‘naturally’ computable. At face value, this sounds innocuous, almost tautological. The conjecture is commonly referred to as the Church-Turing thesis, and Deutsch points out that Turing was in fact conjecturing that both computation and mathematical proof itself are physical. We can see further evidence of this using the Principle of Dimensionality, as it intertwines with both Gödel’s theorems and Turing’s conjecture in slightly different ways. Notice that Gödel’s Incompleteness Theorems do not come with their own conjecture — they are proofs and nothing more. Gödel’s proofs, like all other proofs, are in fact observations (of the abstract mathematical problem-space). And like all observations, they are theory-laden. All proofs are contingent on the truth of the underlying axioms, and the axioms, being true but unprovable, or false, are fallible. Gödel’s second theorem on inconsistency is paradoxical as it is a proof in search of foundationalism: any and all generalisations on the abstract mathematical problem-space will each require an explanation, serving as a new axiom, to expand the problem-space via another dimension. Gödel’s first theorem on incompleteness reflects this: there will be true but unprovable statements that must be classified as new axioms of a new system — the infinite regress reveals an infinity of new dimensions to the abstract mathematical problem-space. Turing undecidability is different. It is paradoxical but it does not reveal the same kind of infinite regress. Each undecidable function will see the Turing machine run forever to no avail. This is an infinite regress, but it occurs within the same dimension — it has nothing to say about the completeness of computation itself. Computation is complete so long as physics is complete.
Consistent with Turing’s conjecture, the Principle of Dimensionality requires improvement to computation in the direction of another physical dimension. And this is precisely how quantum computation is an improvement over classical computation. Given enough time, a classical computer can compute any function that is computable within the laws of physics. A quantum computer can compute nothing more. It can, however, arrive at some solutions far more quickly by dividing sub-computations among multiple physical dimensions.

Given the thought experiment, and its relationship to mathematics and computation, there appears to be a relationship between infinite regress, induction, and self-reference paradoxes. I therefore conjecture that infinite regress, induction, and paradox are three sides of the same coin. As depicted in Figure 3, I refer to this relationship as The Trinity. The Trinity is not mystical; it is a warning sign to change your dimension.

A New Schema

Deutsch defines creativity as “the capacity to create new explanations.” The Principle of Dimensionality implies an alternate definition: the capacity to represent extra-dimensionally. New explanations are the artefacts of creativity. An extremely brief and incomplete history of creative progress in painting is revealing: Painting in Antiquity was 2-dimensional, the 3rd dimension featured in the Renaissance with the invention of perspective, Impressionism deconstructed the essence of scenes, Post-Impressionism evoked emotion from colour and composition, Cubism explored multiple physical dimensions simultaneously, Surrealism romanced the paradox, Abstract Expressionism depicted fractional dimensions, and Pop Art dealt with fungibility. Each school of painting provides a new explanation for what constitutes a painting, and in most cases the invention of a new dimension is obvious.

A new schema for the growth of knowledge is illustrated in Figure 4. As with the Deutschian schema, knowledge growth is problem-driven. Creativity is used to differentiate problems from misconceptions, as well as to conjecture and criticise new theories. Where possible, new theories are empirically criticised via falsification. Both problem-spaces and explanations demonstrate reach. Problem-space reach captures new and existing problems that are discovered to be true but unprovable generalisations on the new problem-space, or false (generalisations on a competing problem-space that cannot exist given the explanation). Explanation reach reveals new problems in the form of questions that emerge from the new explanation. Each question is an apparent generalisation or constant with the potential to be an induction that is true but unprovable on an existing or new problem-space, or is false and an epistemological dead-end.

The process is mirrored for a problem that differentiates to a misconception, only this time the growth of knowledge proceeds by improving an existing theory. Improving an existing theory can occur along two pathways. The first is the refinement of an existing explanation, in which the coupling between an explanation and its corresponding problem-space is tightened — the explanation is made ‘harder to vary’ by bringing it closer to the problem-space, closer to reality. The second is an expansion of the problem-space reach of the theory, in which the explanation is also made ‘harder to vary’ by solving more problems.

Regressive Inducto-authoritarian Paradox

Nature abhors The Trinity. It appears as a boundary between our existing knowledge and reality. Upon his return to philosophy in 1929, Wittgenstein was appointed a lecturer at Cambridge and a fellow of Trinity College, having submitted the Tractatus as a dissertation to receive his PhD. But Wittgenstein, realising that he had been wrong about his conclusions, had moved on from the Tractatus. Over the ensuing years, in and out of Cambridge, he developed a language theory in which meaning is dependent on the ‘language-game’ at play. A language-game is a simple thought experiment designed to illustrate a specific situation in which language is used. Although Wittgenstein had learned from his failure to ground language in a foundation, he remained entranced with The Trinity.

Language amounts to a search for fungibility. When we communicate with one another we try to do so on like-for-like terms. We cultivate shared understanding when this is achieved. Breakdowns in communication occur when sufficient fungibility is not reached and understanding is not shared. Mathematics differs from language in that mathematics amounts to the formalised study of fungibility. And computation amounts to fungibility limited by physics. Creativity in computation is also limited by physics. Creativity in mathematics is limited by physics when comprised of proof using existing axioms, but is untethered when it results in conjectures about new axioms. The rules of language are less stringent still, and creative expression more fluid: we often allow the same words to mean different things, depending on the context. Each context may be thought of as a language-game. Wittgenstein constructed each of his language-games to illustrate his charge that “in most cases, the meaning of a word is its use” — the meaning of the words emerges from the game itself. But this is akin to a problem-space divorced from its underlying explanation. A Wittgensteinian language-game is inductivism applied to language. And as with all induction, each language-game reveals infinite regress and paradox. Wittgenstein’s followers revelled in his profound paradoxes. The celebration has since infected much of philosophy, and may, in part, be responsible for the emergence of modern-day relativism.

Science and mathematics have not escaped observance of The Trinity. Gödel’s work has been touted as both the explanation, and the reason why there can be no explanation, for human consciousness. Likewise, both Gödel incompleteness and Turing uncomputability have each been used as reasoning both for and against the quest for artificial general intelligence. But thinking and consciousness cannot be derived from The Trinity. Consistent with the Principle of Dimensionality, an explanation for thinking and consciousness must acknowledge the centrality of creativity. It is possible, however, for humans and human institutions to simulate The Trinity and enter a delusion. Each simulation must invoke authoritarianism when the lies from the resulting paradoxes become untenable and the spell is at risk of breaking. Authorities that step in do so to prevent creativity and enforce observance of The Trinity. Bringing new paradoxes to the simulation, authorities propagate a continuation of the delusion and the lies. So long as authorities step in to prevent creativity, the process feeds upon itself and we enter an infinite regress. I call this process a Regressive Inducto-authoritarian Paradox, or RIP.

David Deutsch offers many definitions for what he calls The Beginning of Infinity. All of them are reflections of the notion that potential knowledge is infinite; that infinite progress is possible and that we are therefore always at a beginning. The Beginning of Infinity is a progression through an infinity of dimensions. When we ignore the Principle of Dimensionality we find ourselves veering off the path to infinity towards a different kind of infinite; we find ourselves at The Beginning of Infinite Regress. The acronym RIP is metaphorically significant. A rip current is a channel of water that moves at speed away from the shoreline. Both the collision of waves and the structure of the ocean floor, including sandbars and man-made structures, combine to propel the rip away from the beach. We may think of the primary wave system as the Beginning of Infinity and the rip current as representing the Beginning of Infinite Regress. There are two ways to escape a rip current. The first is to remain calm and allow it to run its course, staying hopeful that the primary wave will again take over and bring you back towards the shoreline. But you never know how far the current will take you, and how fast. An alternate solution is to swim parallel to the shoreline, to change your dimension.

Terms

Abstract Problem-space — an abstract phenomenon in need of an explanation.

Authoritarianism — arbitrary epistemological bedrock by means of authoritarian creed.

Church-Turing-Deutsch Principle — Also known as the strong Church-Turing Thesis, the principle that computation is universal and capable of simulating any physical process.

Computation — the study of fungibility limited by physics.

Creativity — the ability to represent extra-dimensionally.

Critical Rationalism — Popperian epistemology, in which knowledge grows through a process of conjectures and refutations.

Demarcation — the imposition of a boundary to categorise concepts.

Dimension — an aspect or feature that may be depicted as a consequence to an explanation of a problem-space.

Empiricism — the misconception that sensory experience is an epistemological bedrock.

Explanation — a statement about what a problem-space is, how it works and why.

Falsificationism — Popper’s criterion for demarcation: that which is science must be conceivably falsifiable by empirical test.

Foundationalism — the misconception that an epistemological bedrock is necessary or logically possible.

Fungibility — the property of being identical and mutually interchangeable.

Generalisation — a feature on a problem-space that displays constancy.

Hilbert’s Program — David Hilbert’s charge to solve the perceived foundational crisis of mathematics.

Incompleteness — in mathematics, the notion that a formal mathematical system can never be complete.

Inconsistency — in mathematics, the notion that a formal mathematical system cannot explain itself.

Induction — any attempt to use a problem-space to explain itself.

Inductivism — the misconception that inductive inferences contain information and drive the growth of knowledge.

Infinite Regress — a series of justifications or arguments that cannot come to an end.

Justificationism — the misconception that knowledge can only be meaningful if it can eventually be justified by some form of authority.

Language-game — a thought experiment constructed to demonstrate a particular use of language.

Misconception — a problem that is found to be a misunderstanding of existing knowledge.

Paradox — a logical contradiction arrived at through self-reference.

Principle of Dimensionality — the principle that all explanations necessarily reside in another dimension to the problem-space they purport to explain.

Problem — an unexpected generalisation from an unexplained or underexplained problem-space.

Problem-space — a phenomenon in need of explanation.

Problem situation — a reference to the circumstance that a problem-solver is faced with. The problem situation is contextual and includes the problem-solver’s received wisdom. Different problem-solvers can explain the same problem-space from different problem situations.

Regressive Inducto-authoritarian Paradox — a simulation of inductive thinking in the mind of a single human, or in the minds of a collection of humans via culture and institutions.

Theory — a problem-space and its corresponding explanation.

Theory-laden — the idea that there can be no such thing as pure observation; theory precedes observation.

Trinity — a boundary between reality and logical inconsistency that reveals a fundamental relationship between infinite regress, induction and self-reference paradox.

Uncomputable — an abstraction that cannot be accurately simulated via computation.

Vienna Circle — a group of intellectuals from the early 20th century, who would meet in Vienna, Austria to discuss and promote what would become Logical Positivism.

Summary

Empiricism, the prevailing conception for how knowledge grows, promotes observation as foundational. But empiricism, as with authoritarianism and all forms of justification, amounts to the misconception that foundations are necessary and logically possible. Critical Rationalism, the epistemology fathered by Karl Popper and further developed by David Deutsch, avoids foundationalism and correctly describes knowledge growth as a problem-solving activity through a process of conjectures and refutations. Superior though it is, Critical Rationalism in its current form is missing an element — the problem-space. The inclusion of the problem-space within the machinery of Critical Rationalism expands what constitutes inductive inference: inductive inference amounts to any attempt to use a problem-space to explain itself. Problems are solved when an explanation for a problem-space is uncovered. A problem-space becomes discernible when it is sufficiently explained. Explanations necessarily reside in another dimension to the problem-space they purport to explain — the Principle of Dimensionality. Inductive inference and the Principle of Dimensionality each appear within the search for foundations for language, mathematics and computation. There exists a relationship between infinite regress, induction and paradox — The Trinity. Romancing The Trinity takes us away from the Beginning of Infinity and sets us on a path towards a different kind of infinite — The Beginning of Infinite Regress.

References

Brett Hall (bretthall.org)

David Deutsch, The Beginning of Infinity (Penguin Books, 2011)

David Deutsch, The Fabric of Reality (Allen Lane, 1997)

David Deutsch, The Mathematicians’ Misconception (Transcript of a talk given at the International Centre for Theoretical Physics, Trieste, Italy, on the occasion of being awarded the Dirac Medal, March 14 2018)

David Deutsch, “Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer,” Proc. R. Soc. Lond. A 400 (1985): 97–117.

Karl Popper, Conjectures and Refutations (Routledge, 1963)

Karl Popper, David Miller, Popper Selections (Princeton Paperbacks, 1985)

Ludwig Wittgenstein, Tractatus Logico-Philosophicus (Kegan Paul, 1922)

James Klagge, Simply Wittgenstein (Simply Charly, 2016)

David Edmonds, John Eidinow, Wittgenstein’s Poker (Deutsche Verlags-Anstalt, 2001)

Paul Raatikainen, “Hilbert’s Program Revisited,” Synthese, 137 (2003): 157–177.

Charles Petzold, The Annotated Turing (Wiley Publishing, 2008)

Dale Jacquette, Henry Johnstone, “Dualities of Self-non-application and Infinite Regress,” Logique Et Analyse, Nouvelle Serie, 32, 125/126 (1989): 29–40.

Stanford Encyclopedia of Philosophy (plato.stanford.edu)

Wikipedia (Wikipedia.org)

Nassim Nicholas Taleb, Fooled By Randomness (Random House, 2001)

Stephen Hawking, A Brief History of Time (Bantam Dell Publishing Group, 1988)
