The Truth Machine: Can AI Replace Thought?

Why We Seek Truth at the Atomic Level

Haris Krijestorac
Predict
13 min read · Jun 25, 2023

--

Noam Chomsky’s recent New York Times opinion piece on The False Promise of ChatGPT offers a contentious perspective on the question of how much AI is capable of contributing to our understanding of the world. In short, Chomsky argues that, while AI tools such as ChatGPT may synthesize information and extract patterns in an efficient manner, they will always fall short of truly understanding phenomena. That is, while AI might be able to identify associations between quantified variables, only humans can truly develop explanations behind these patterns. This human insight is precisely what we have traditionally considered thinking, and can thus never be replaced by even the most technically sophisticated engine of computation.

Although Chomsky describes the limitations of AI in a manner that is worth consideration, his characterization of its “false promise” is somewhat implicit, if not underdeveloped. Moreover, his analysis does not explore the reasons why we make or believe in this implicit promise. To evaluate the legitimacy of this promise, we must thus go deeper by (1) formalizing the promise of AI, and (2) exploring the ideological underpinnings of said promise.

The Promise of AI

To understand AI’s promise, we can go straight to the source and ask ChatGPT itself. The discussion below illustrates the current worldview concerning human vs. AI competencies.

Can AI Replace Thought?
Can AI Ever Replace Human Thought?

Although ChatGPT admits to limitations of AI in light of the current technological landscape, it remains agnostic with regard to its theoretical potential. Hence, the area of ambiguity is not whether AI can currently benefit from human input, but whether the ideal AI has any need for such insight.

The 1997 match between IBM’s chess engine Deep Blue and world chess champion Garry Kasparov is an illustrative case study in comparing human and AI performance. Although Kasparov lost the match, he enjoyed a small victory by winning its first game. Reflecting on the implications of this minor win in light of his greater loss, Kasparov argued that a losing record against AI is not indicative of AI’s supremacy in the domain of chess. Rather, as long as humans can outperform AI even once, there is value in human ingenuity in any domain. This is because AI tools such as Deep Blue always operate at maximum potential, while human beings are subject to distraction, fatigue, fluctuations in confidence, and other variables that may affect performance. Hence, as long as the occasionally optimal human can outperform the consistently optimal AI, human thought is irreplaceable.

Accepting the burden of performance that Kasparov places on AI, one may ask: is the ideal endpoint of AI indeed to completely outperform and thus replace us? To address this question, we must first understand why and how we might even have the impulse to construct a machine aimed at such an ambitious outcome.

From Data to Truth?

I argue that the promise of ChatGPT and other AI tools is that the extraction and computational synthesis of data can, with minimal human intervention, advance knowledge in any discipline concerned with the pursuit of Truth. As depicted in the Truth Machine diagram, this promise relies on the assumption that Truth can be distilled from raw data, with no loss of purity in the filtration process. Under such an assumption, one might regard the feats of AI as feats not only of technology or engineering, but of knowledge production.

The Truth Machine: Extracting Truth from Data (inspired by the DIKW pyramid)

The quest to replace human insight entirely with computation goes back decades. An illustrative example of this ideology is the 2008 article entitled The End of Theory by Chris Anderson, author of “The Long Tail”. In this piece, Anderson argues that the deluge of data will make theorization (i.e., human input into the process of knowledge production) obsolete, as “science can advance even without coherent models, unified theories, or really any mechanistic explanation at all”.

While one might consider Anderson’s article to be a selective reference, as a researcher I have been privy to conversations within my broader community suggesting that many of us would, at least implicitly, concur with its proposition. For example, some of my fellow scientists have expressed to me a fear of being replaced by ChatGPT, which can potentially scan existing literature algorithmically, conduct analysis in an automated fashion, and even write academic papers on its own. Such a fear suggests that these scholars indeed see little or no value in human thought in light of the future of AI. In fact, the consensus in the computer science and adjacent communities is that Chomsky (whose comments on AI largely predate ChatGPT) has held back progress with his anti-reductionist stance.

Human Thought — A “Bug” in the Truth Machine?

Despite lip service towards notions such as human-AI collaboration or AIQ, we are quick to tout the advantages of pure computation (e.g., efficiency, accuracy, objectivity), while remaining comparatively vague when describing our own contributions to the Truth Machine. As evidenced in my discussion with ChatGPT, the role of human input tends to be characterized in terms of abstract romanticisms about our “complex cognitive processes”, rather than any concrete advantages. In the domains where ChatGPT admits to its limitations, it alludes to areas of subjectivity or individual preference; however, the notion of Truth with a capital T is associated more with objectivity and generalizability, rendering the aforementioned subjectivities superfluous to its pursuit.

Given the idealization of computation juxtaposed with apathy towards the value of human ingenuity, one can argue that we see the technical process of AI as, while temporarily limited, theoretically perfectible; our human flaws, in contrast, are presumably inescapable. For this reason, we not only minimize the value of our thought in the Truth Machine, but arguably see ourselves as a “bug” in an otherwise infallible engine. That is, we assume that if such a Machine were perfectly designed and had universal data, it would indeed be superior to us even by the Kasparovian standard. Hence, insofar as we do insert ourselves into this Machine, it is typically in the context of initiatives such as “algorithmic de-biasing” or “responsible AI”, which aim to compensate for flaws that we have ingrained into it. Rarely do we critique the Truth Machine itself, or ascribe to it the inescapable shortcomings we tend to ascribe to ourselves.

To understand how the Truth Machine may be limited not merely by its technical design but by its theoretical and ideological framework, I will formalize the relationship between computation and Truth in the following sections. Specifically, I will chart a path from computation to science, and then from science to Truth.

Reductionism and Science

In response to the question “what is science?”, our tendency would likely be to offer a neutral and innocuous response, alluding to the “scientific method” and characterizing it as a straightforward, objective, and ideally reproducible methodology. In contrast, alternative modes of inquiry such as philosophy or spirituality are seen as less formulaic, and thus less rigorous. However, such disciplines can, like science, also be defined as lenses through which one may pursue a deeper understanding not only of subjective, but of absolute reality. One might thus ask: what structurally differentiates “science” from such alternative modes of inquiry?

To distinguish science from other frameworks aimed at the pursuit of Truth, a starting point may be the distinction between the numerous sciences. If science itself were a well-defined and coherent methodology, one might accept the entirety of the field as one discipline. However, in practice we do categorize it into sub-disciplines such as biology, chemistry, and physics.

The distinction between the sciences may, through the lens of complexity theory, be viewed as a difference in the degree of acceptance of the reductionist worldview. As the least reductionist science, biology seems to be the most accepting of the chaos of nature. As such, we seem to simply learn it “as it is”, while limiting inquiry into why and how living beings evolved in the precise manner they did. Moving from biology to chemistry, we begin to highlight phenomena such as “interactions”, whose explanations cannot necessarily be distilled into atoms, the reductionist building blocks of the field. For example, although neither hydrogen nor oxygen is wet, combining them in a two-to-one atomic ratio produces water, whose emergent wetness can only be explained through their interaction. Next, physics concerns itself even less with such emergent properties, and tends to define higher-order constructs (e.g., “force”) cleanly in terms of quantifiable properties (e.g., the mass and acceleration of objects). Finally, mathematics, despite sometimes not even being considered among the sciences, is in some ways the most “scientific” of them all, in that it aims to convert a set of given axioms into lemmas and theorems without any leakage of Truth in the process. Although even this endeavor can be considered dubious in light of Gödel’s incompleteness theorems, the explicit reductionist objective remains.
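The physics example above can be written out explicitly. In Newton’s second law, the higher-order construct of force is fully specified by quantifiable properties, with no emergent remainder:

```latex
\[
\vec{F} = m\,\vec{a}
\]
```

Here force (the left-hand side) is nothing over and above the mass and acceleration of an object (the right-hand side), which is precisely the reductionist ideal that, on this view, biology and chemistry only partially attain.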

Science and Reductionism: An Illustration

This comparison of scientific disciplines illustrates that even within the framework of what we consider rigorous Truth (as we typically would not consider biology or chemistry to be false or un-rigorous), there is a spectrum of tolerance towards the insistence that reality be reduced into computable axioms. Considering this spectrum, one can view the broader field of “science” as an arbitrary threshold drawn between biology and other, less reductionist frameworks of inquiry. Hence, simply falling outside an ideologically and academically accepted threshold of reductionism seems to render an insight suspiciously distant from our ideal of rigor.

Viewing science as an arbitrary threshold of tolerance towards reductionism, one might ask why we do not expand this threshold of Truth to include alternative modes of inquiry. For instance, would we really insist that a psychologist’s insights are more rigorous only if we can capture them in a CT scan? And must this biological scan then be broken down into an even more reductionist framework, such as chemistry, physics, or even pure mathematics? Even the popular locution “break it down” implies that by distilling a phenomenon into more axiomatic elements, we arrive at a deeper understanding of it.

The limitations of science can arguably be attributed not merely to a lack of data, procedural errors, or individual biases in the execution of an otherwise infallible approach. Rather, they may be attributed to the loss incurred in translating complex phenomena into a language the scientific process can understand, one that insists on packaging observed reality into quantifiable axioms that can serve as inputs to a computational methodology. One might again find relevance in the work of Chomsky, who questioned the extent to which the mysteries of nature are reducible.

Science and Truth

In noting the connection between science and reductionism, it is important to add that the zeitgeist equates science with Truth. For example, to claim that a statement is “unscientific” is to simply dismiss it as untrue. Additionally, to discuss a more abstract phenomenon, such as meditation, in terms of “the science of” it, or to claim that it is “supported by science”, is to add weight to its veracity. Conversely, it would seem bizarre to argue that a less reductionist field like spirituality substantiates the veracity of a claim within physics. Ergo, if Reductionism = Science and Science = Truth, by transitivity one can propose that Reductionism = Truth.

The reductionist worldview is conspicuously conducive to the glorification of AI, which relies on the distillation of phenomena into computable elements. Nevertheless, it remains unclear why we glorify this reductionist approach. While this complex question invites a multitude of explanations, the following sections will argue that religion and especially Christianity are important ideological antecedents to the dominant preference towards reductionism and the pursuit of Truth through AI.

Truth and God

The assumption that our human impulses pull us out of an otherwise infallible ideal of Truth requires that there exist a “fair and unbiased” vantage point to begin with. The assumption of the existence of this supreme intelligence is non-arbitrary, and it is arguably religion that inspires such a presupposition through the notion of God. With God being the arbiter of justice and omniscient source of knowledge, one might complement the prior equalities with the proposition that God = Truth.

In order to overcome our fallen state and reach Truth, science adopts a detached, arguably “Godly” posture towards nature. Rather than submitting to nature and seeing it as a source of Truth, science positions itself as nature’s creator, in the sense that it perceives nature as an external entity to be manipulated. The scientific emphasis on causality, about which I have previously written, exemplifies this imperative of divine control and domination over nature. This imperative extends even to the organic mind and consciousness, as outlined in AI evangelist Ray Kurzweil’s treatise “How to Create a Mind”.

Among other scientific and technological advancements, AI plays a unique role in helping us play God with Nature. Critics such as Yuval Noah Harari may disagree, claiming that as an autonomous and opaque system, AI supplants human power — that is, it is distinct from prior technological innovations such as the automobile or atomic bomb, whose functioning ultimately remains in our hands. However, I would contend that it is precisely this feature of being “beyond us” that gives AI its divine mystique and allows us to achieve Godly ascension as its creators. From a technical perspective, AI is indeed distinct from other computational methodologies, in that it is a “black box”, learning and changing with new inputs, and thus having a “life of its own”. In contrast, a statistical model or algorithm has a transparent structure that we can comprehend, therefore remaining strictly “human”. AI is consequently both of us and beyond us, earthly yet divine, in imitation of the figure of Jesus Christ.
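The contrast between a transparent statistical model and a “black box” can be sketched in a few lines of Python. This is a toy illustration under invented assumptions (the data-generating rule and the tiny random-weight network are made up for the example), not a claim about any particular AI system:

```python
import math
import random

# A "transparent" model: ordinary least squares for y = slope*x + intercept.
# Its two fitted parameters map directly onto human-readable structure.
xs = [float(i) for i in range(10)]
ys = [2.0 * x + 1.0 for x in xs]  # data generated by a known rule
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
intercept = my - slope * mx  # recovers the generating rule: slope 2.0, intercept 1.0

# A "black box": even a tiny random-weight network maps inputs to outputs,
# but no individual weight carries a comparably interpretable meaning.
rng = random.Random(0)
w1 = [rng.gauss(0, 1) for _ in range(8)]
w2 = [rng.gauss(0, 1) for _ in range(8)]

def net(v):
    hidden = [math.tanh(v * w) for w in w1]
    return sum(h * w for h, w in zip(hidden, w2))
```

The regression’s slope and intercept can be read off and checked against the world, while the network’s weights, even in this trivial case, resist such inspection; scaled up to billions of parameters, this opacity is what gives AI its “life of its own”.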

Truth and Nature

Although the reductionist impulse equates Science with Truth, an alternative perspective might perceive the ultimate Truth as lying in the chaos of organic Nature, rather than in its scientific approximation. Indeed, this Nature is the reality we are given, and can thus be viewed as some kind of “ground truth”, to borrow AI jargon. While the ideology of science imbues us with a perception of our human models of nature being a higher Truth, these abstractions should arguably not be confused with the supreme intelligence that is God’s creation.

Aptly, the word indicating the opposite of Nature is artifice, as in artificial intelligence. Within the sciences, as we move towards reductionism, we also move towards a more artificial understanding of the world. In mathematics, the most reductionist of the sciences, notions such as infinity, zero, or asymptotes might have questionable validity in the realm of Nature. Even moving the goalposts slightly towards physics, a field often represented in terms of reductionist formulae, the discipline submits to natural givens such as the acceleration due to gravity being 9.8 meters per second squared. In doing so, it momentarily abdicates the scientific imperative to construct an artificial world, and instead submits to a higher Truth in Nature, and arguably God.

Just as AI is driven by a desire to create a transcendent Truth beyond God’s nature, Christianity ostensibly demands that its followers not merely emulate, but transcend God. As Jesus dies on the cross, God is conceivably foisting His duties onto Us, as We must carry on the process of redemption. Such an ideology would consequently reject the submission of man to Nature or God’s will, but suggest instead that we construct our own Truth. Initiatives such as transhumanism can be seen as being motivated by the aforementioned drive, as they demonstrate our audacity to transcend our fallen state through our own mortal effort. In our quest to achieve transcendence via AI, we have seemingly flipped the Turing Test on its head — instead of machines being measured against their ability to emulate human intelligence, the new gold standard is for humanity to ascend through fusion with the divine Machine.

Reductionism and the Holy Trinity

Besides motivating humans towards Godly transcendence, Christianity also implies that the path towards this higher Truth is reductionism. Given the perspective that God = Truth, Christianity suggests that even this Truth can be broken down into what is known as the Holy Trinity: the Father, the Son, and the Holy Spirit. Hence, God = Reductionism.

To best appreciate the reductionism in the Christian conceptualization of God (and hence Truth), one may contrast it to alternative perspectives. For example, this reductionist conception of God is in direct opposition to the philosophy of Islam, whose most defining proclamation is that there is “no God but God” (as stated in the shahada). This worldview proposes that there is “one Truth” that transcends reductionist approximations. Accordingly, the greatest sin of Islam is shirk or polytheism, upon which Christianity would be seen as dangerously bordering. Explicitly polytheistic religions like Hinduism allow for their followers to select a God that mirrors their subjective Truth, implying that there may be multiple Truths. Hence, the notion that Truth is both singular and reducible is seemingly most aligned with Christianity.

The Future of Thought

By reducing our quest for Truth into a computational exercise, the field of AI does not simply augment the existing body of knowledge in a neutral, arbitrary fashion. Rather it redefines the scope of what we consider knowledge, based on an ideologically-driven notion of Truth (= God = Reductionism = Science).

For Thought to have a future in light of AI as a counter-force, the boundaries between these two forces must be more clearly drawn. Doing so requires a deeper interrogation of both of their roles in the Truth Machine. Until we do so, the question of whether AI can replace Thought will remain premature, as the underlying structure of both modes of knowledge production will be obscured by the ideologies defining them, as well as by our preference or aversion towards them. This article hopefully sheds light on some of these ideological underpinnings.

Future contributions to the discussion of AI vs. human thought should similarly focus on what AI fundamentally is, not simply what it currently does. Too often, non-technical discussions of AI get stuck in the loop of weighing externalities against benefits. Going deeper by theorizing on the limits of AI is particularly useful for understanding its role in the long term, which is indeed where many of its fears and promises are situated.

To complement discussion of the ideology and structure of AI, we humans should work towards doing the same for our own role, that of thinking. By abdicating this responsibility and narrowing the scope of Truth to “that which can be computed”, we risk overlooking important explanatory frameworks, such as evolution or psychological theories, that might help us understand the world. As AI challenges the role of Thought, it also offers us an opportunity to develop a more rigorous characterization of the role of human input into the Truth Machine.

Along with continuing to advance AI, the scientific community has a unique obligation to articulate the boundary between the role of computation and that of thought. Given that our discipline synthesizes computation and cognition, we scientists are the last line of defense against AI’s conquest of Truth. Without casting AI as either ally or enemy, we should take on the responsibility of defining AI’s frontiers amid its ideologically driven onslaught. We must do so not only to avoid our own obsolescence, but in service of the advancement of knowledge.


Information Systems Professor and Researcher at HEC Paris