anòmia Journal

Prolegomenon to a Treatise on ‘mathematical structuralism’, de-ontologized metaphysics of hermeneutics, and the ‘synthetic a priori’

(localization, individuation, Einstein’s relativity, neurophysics, Aristotelian realism, eliminative structuralism, cybernetic contradiction, the axiomatic method and ‘hypothetico-deductive’ suspension, the geometric logic of “topos theory” & non-Euclidean geometry, structuralism of category theory/topos theory/type theory)

By E.S.

Key ideas/Keywords: local-global navigation through Negarestani’s “localization”, minimal metaphysics, individuation of potentia within a metastable system (Simondon), Einsteinian relativistic physics and its relationship to philosophical time, “a mathematical neurophysics of the phenomenology of the perceived world and common sense” (Petitot), the Aristotelian ‘in the thing’ realism of structure existing as concrete systems which “ontologically” instantiate the structure, the Axiom of Choice/Zorn’s Lemma in relation to the Banach-Tarski Paradox and the ramifications for the social sciences, the universalist unity of mathematics from the Bourbaki approach, the complementarity of ‘axiom’ and ‘theorem’, the multiplicity of ‘case(s)’ through general ‘invariants’ (topological, group-theoretic or categorical) which constrain the objects into varying algebraic structures, category theory as a philosophical structuralism, Homotopy Type Theory as a foundation for mathematics?, the Tarski axiom and Grothendieck’s universes, arithmetic becomes geometric (Alain Connes on the Natural numbers as a sheaf over a topos)[1], the eliminative structuralism whereby structure is a “shorthand” for different isomorphic systems which share patterns[2], eliminative structuralism of regimented propositions that have been de-ontologized, the ontology of the Tarski axiom

Dedicated to my mother & Max Guy


This Prolegomenon to a Treatise will examine the possibility of a foundational system which conceptually unifies ordinary mathematics (category theory, topos theory, and homotopy type theory are the candidates). This treatise will also discuss structure in the sense of philosophical structuralism (not to be confused with French structuralism), and ask whether category theory provides a structuralism, as Awodey suggests. Recent research following the univalence axiom has produced a new contender: does Homotopy Type Theory provide a foundation for mathematics? We will borrow Negarestani’s notion of ‘localization’ and try to provide a more rigorous context for a ‘site’ directly within topos theory, which has ramifications for any ontological framework. Moreover, we will discuss the background ontology necessary to any foundational ‘topos’, which will depend on the use of the Tarski axiom.

We will address the issue of the “synthetic a priori” and whether we can, in the Kantian manner, treat geometry as the “a priori”: at the macro-level, as Einstein has shown, where Minkowski spacetime is a geometry whose only invariant is the speed of light; at the meso-level, in “the relativity of simultaneity”, where “distant simultaneity — whether two spatially separated events occur at the same time — is not absolute, but depends on the observer’s reference frame”; or at the micro-level (of neurons), where perception uses “cut loci”, a geometric form from differential geometry, so that we can develop a “neurogeometry of vision” as Jean Petitot has done.
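The meso-level claim can be made concrete with a short numeric sketch (the velocity and the distance between events below are arbitrary illustrative values): two events simultaneous in one frame fail to be simultaneous for a moving observer, by the Lorentz transformation.

```python
# "Relativity of simultaneity" in miniature: two events simultaneous in the
# rest frame (t = 0) occur at different times in a moving frame, via the
# Lorentz transformation t' = gamma * (t - v*x / c**2).
import math

c = 299_792_458.0          # speed of light, m/s (the lone invariant)
v = 0.6 * c                # observer's relative velocity (assumed value)
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def lorentz_t(t, x):
    """Time coordinate of event (t, x) in the moving frame."""
    return gamma * (t - v * x / c**2)

# Two spatially separated events, simultaneous (t = 0) in the rest frame:
event_a = (0.0, 0.0)       # (t, x)
event_b = (0.0, 1_000.0)   # 1 km away

ta, tb = lorentz_t(*event_a), lorentz_t(*event_b)
print(ta == tb)            # False: simultaneity is frame-dependent
```

For v = 0.6c the dilation factor gamma is exactly 1.25, and the second event is shifted to a (tiny, negative) time in the moving frame.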

Furthermore, we will attempt to articulate some “self-evident structural Truth” at the “conceptual level” itself. At the “conceptual” level of any epistemology, I will be arguing for a de-ontologized metaphysics to accommodate the ineffability of the Real and the potentia of the conatus (as in Leibniz or Spinoza); but rather than philosophize this ontology through some transcendent background (the realm of virtuality in Deleuze, or ‘the ontological’ for Heidegger), we shall view logico-deductive abstraction itself (as in group theory) as a means to articulate intrinsic invariants of material which need not be embedded in a surrounding Euclidean space for its measurement, a.k.a. thinking (see: Gauss’ Theorema Egregium). Through this immanent hypostatization of material, which scientifically determines its own place/count reflexively (in the super-structuralist sense of Althusser, Rancière, Badiou) but is, simultaneously, completely hypothetical (since it is derived from axioms), one can consider an intensive definition that is completely relative (Grothendieck’s perspective), which allows for ontological involvement yet suspends its own background ontology through the complementarity of the duality (Laruelle) between ‘theorem’ and ‘axiom’ (or, for that matter, between ‘actual’ and ‘virtual’). This Laruellean approach may be likened to eliminative structuralism, where ‘statements’ get regimented/deflated into analytic propositions which are trivially/vacuously true for a system, by assuming the hypothetical ‘hypostatization’ of the theory. Because of the vacuity of any deflated statement, this approach flattens the underlying ontology of a philosophical theory through a quantifier parameter fixed beforehand. I will also discuss the presuppositions of Time, modeled as a parameterized real-valued function, and argue against ‘the time of philosophers.’

We will still attempt to demonstrate the underlying ontology of ‘technique’, charting a course from Gilbert Simondon, the father of individuation; to the Deleuzian system of becoming from the Idea (“The Ideal Synthesis of Difference” in Difference & Repetition); to Laruelle’s de-ontologized material experimentation, uni-lateral to the Real; to René Thom’s discussion of the aporia of the technical opposition between continuity and discontinuity. The de-ontologized metaphysics of Laruelle, which is axiomatically derived (as opposed to a theological doctrine or a philosophical transcendence), is explained here:

“Philosophers, Laruelle insists, do not know what they are doing. They are never doing what they say or saying what they are doing — even and especially when they purport to be able to legitimate their philosophical decisions in terms of some ethical, political or juridical end. The theoreticist idealism inherent in decision is never so subtle and pernicious as when it invokes the putative materiality of some extra-philosophical instance in order to demonstrate its ‘pragmatic worth’. To condemn Laruelle for excessive abstraction on the grounds that the worth of a philosophy can only be gauged in terms of its concrete, extra-philosophical (e.g. ethical, political or juridical) effects is to ignore the way in which extra-philosophical concretion invariably involves an idealized abstraction that has already been circumscribed by decision… In non-philosophy, radical axiomatic abstraction gives rise, not to a system or doctrine inviting assent or dissent, but to an immanent methodology whose function for philosophy no one is in a position to evaluate as yet. Ultimately, then, non-philosophy can only be gauged in terms of what it can do. And no one yet knows what non-philosophy can or cannot do. (Brassier 2003, 34)”[3]

At the level of physics, an Aristotelian realism makes the most sense, which is known in the sciences as “ontic structural realism”. Why? Epistemology, at the level of structure, is either “in the thing” realism, “before the thing” Platonism, or “after the thing” eliminative structuralism. So if one is of the realist camp, one would say that relativity exists “in situ” on the macro scale of large bodies, and that these structures do not appear in everyday life owing to a problem of scale. Similarly, catastrophic hysteresis means there is a lag between the number of coronavirus infections in one week and the next, but only once a pandemic does in fact occur. But can we accept the entities described by physics as really existing? Certainly special relativity is true as a framework. But as Negarestani has explained, van Fraassen did not believe electrons really existed.[4] Negarestani has previously articulated an ontology defined by epistemology, which is not reducible to mere ‘ontological monism’. We will counter that categorical structuralism provides the basis for a variant form of realism similar to “set-theoretic structuralism”, where “structures are isomorphism types (or representatives thereof) within the set-theoretic hierarchy.” But the ontological commitments of such a system need not presuppose a “rich ontology of sets.”[5] The question still remains whether category theory is isomorphism invariant. It is! But one could consider topos theory instead. There remains, however, the question of the Grothendieck universes needed for topos theory. We will not be able to rigorously consider the ontology of topos theory unless we consider the Tarski axiom, which presupposes an inaccessible cardinal (i.e. a strong form of infinity). Yet, for example, Andrew Wiles in his proof of Fermat’s Last Theorem did not need to invoke these uncountable universes.
The solution, I believe, to this foundational problem is to pursue inquiry into a (further advanced) field known as ‘Homotopy Type Theory’, which views things as instances, known as tokens.[6]
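The set-theoretic structuralist’s talk of “isomorphism types” can be made concrete in a toy computation (the particular pair of systems chosen here is illustrative): the integers mod 4 under addition and the powers of i under multiplication are different “stuff” instantiating one and the same cyclic structure, witnessed by an explicit isomorphism.

```python
# "Structure" as shorthand for a class of isomorphic systems: Z/4 under
# addition and {1, i, -1, -i} under multiplication share one abstract
# (cyclic) pattern. The map n -> i**n is the witnessing isomorphism.
z4 = [0, 1, 2, 3]
powers_of_i = [1, 1j, -1, -1j]

phi = dict(zip(z4, powers_of_i))   # candidate isomorphism n -> i**n

homomorphic = all(
    phi[(a + b) % 4] == phi[a] * phi[b]   # phi carries + to *
    for a in z4 for b in z4
)
bijective = len(set(phi.values())) == len(z4)
print(homomorphic and bijective)   # True: same structure, different "stuff"
```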

Does the recent work in Homotopy Type Theory present an argument for structuralism, which is a completely top-down (by its intensional nature) approach to math (as opposed to bottom-up constructions)? The “ontic structural realist” James Ladyman has argued that HoTT is a foundation.[7] Awodey has said it provides a philosophical structuralism.[8] Other examples of this notion of top-down classification in mathematics include Gaussian curvature (an invariant distinguishing elliptic, hyperbolic, and Euclidean geometry), Klein’s group-theoretic programme for non-Euclidean geometry (whose invariants, such as the cross-ratio, are group-theoretic, and distinguish different types of affine or projective geometry), and category theory. This top-down organization of mathematics presents a multiplicity of ‘case(s)’ through general ‘invariants’ (topological, group-theoretic, or categorical) which constrain the (group-theoretic or categorical) objects into varying geometries (Klein’s method) or algebraic structures (category theory).
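Klein’s point about group invariants admits a small computational sketch (the particular points and the particular projective map below are arbitrary): the cross-ratio of four points is untouched by a Möbius transformation, which is exactly what qualifies it as an invariant “constraining” projective geometry.

```python
# The cross-ratio as a projective (group) invariant, checked exactly
# with rational arithmetic.
from fractions import Fraction

def cross_ratio(a, b, c, d):
    """(a,b;c,d) = ((a-c)(b-d)) / ((a-d)(b-c))."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def mobius(x, m=(Fraction(2), Fraction(1), Fraction(1), Fraction(3))):
    """x -> (px + q)/(rx + s): an arbitrary projective map (ps - qr != 0)."""
    p, q, r, s = m
    return (p * x + q) / (r * x + s)

pts = [Fraction(n) for n in (0, 1, 2, 5)]
before = cross_ratio(*pts)
after = cross_ratio(*(mobius(x) for x in pts))
print(before == after)   # True: the cross-ratio survives the transformation
```

With these points the cross-ratio is 8/5 both before and after the map, and the same equality holds for any choice of distinct points and any invertible Möbius transformation.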

The “highest truth”, to build off Alex Boland’s point about “craftsmanship”, follows what he calls intuitive compositionality, or Spinoza’s intuition. In a categorical context, objects are unique only “up to isomorphism” and morphisms are the generalizing tool, which leads back to the question of “the conceptual level” mentioned before. To qualify an idea of Alex Boland’s, who said “I believe everything is a matter of aesthetics simply because all of epistemology is autopoietic”, I would counter with: “everything is a matter of aesthetics since all epistemology is conceptual. Conceptual mathematics & the axiomatic method are geometric. Therefore, everything, by way of epistemological delimiters, is a matter of aesthetics.” This is “conceptual poetry” (from the axioms) as opposed to “autopoiesis”, because instead of a homeostatic system (as with Luhmann and the imperfect model), there is a sense of “sheen” in the multivariate calculus of a manifold, in which the “geometric intuition” is pure, as if pre-constituted. Pure mathematics is a matter of aesthetics, according to the mathematician G.H. Hardy. I am not arguing for Platonic Forms but rather for some form of Aristotelian realism, where logic and geometry converge, and where structure takes place in exact terms within the Real itself but requires “hypothetical” actualization, as in the thought-experiment of Einstein’s special relativity. There is no universal truth in the heavens. The Real is a necessary cause, but the actuality need only be sufficient and not directly manifest (de-actualized worlds that have a unilateral source, the Laruellean Real). The conceptual invention of topological machinery, for example in a topos, allows for radical approaches between the knowing subject and the ostensibly fixed object. “Base changes” allow the freedom to move from one level of “open sets” to another in the navigation of a “topos”.
The invention of this conceptual machinery, which formalizes intuition at the highest level of abstraction, is novel for “unify[ing] deep insights on arithmetic (number) and geometry (form)”[9]. The conceptual invention of this epistemologically unmoored geometry, from the wellspring of individuation/ontogenesis, is a becoming. These ‘germs’ (in a ‘section’ of a ‘stalk’) are salient forms which have been individuated from the ‘pregnance’ emanating from ‘source-forms’, to use René Thom’s words. This is morphogenesis, of the realm of aesthetic “creation”, individuated from the ‘sheaf of sets’. Dynamic perspectives. But through the axiomatic nature of such a geometry, we hope to defend a de-ontologized metaphysics that is not an “abusive hypostasis.” The axiomatic method allows for pure aesthetic fabrication (in the sense of Hardy), but freestanding and geometric.

In imperfect cases, such as Luhmann’s autopoietic systems (social or dynamical), these predictive determinisms have outliers which are not predicted. The question ultimately remains whether all systems beyond thermodynamics (such as economics, for example) are ergodic! Alex Boland has made the point that cybernetics is not ergodic. This is all to suggest that at a deeper level, there are cases where systems abide by larger “extant forms” and are not simply reducible to the recalibration of a system (predictions which hold only in 99.9% of cases, which is why there are outliers such as the Great Depression). That is, there are forms which are algebraically precise in a geometric/topological sense (see, for example, topological invariants such as compactness: every open cover has a finite subcover) and are therefore more intuitive. We could take an economic space and ask if it is convergent. If it is a metric space, then it will be compact when it is complete and totally bounded. So if we can locate some upper bound for supply and demand, and if every Cauchy sequence converges within the space, then it is complete. So the value of a product (in economics) may follow some heuristic such as the “law of diminishing returns,” but this is only a principle and not naturalized, since how could we define the topology of the space of utility? (In the same sense, Moore’s law is not exact in a mathematical sense either.) Some truths exist in the algebraic exactness of their terms, which cannot be simplified to the heuristic of a system. One example is generalizing basic arithmetic to a topos, where logic becomes inherently geometric. One can think of the integers as geometric through “the semiring of tropical integers” as a sheaf of sets over a topos (Alain Connes). The logic of catastrophe modeling or Monte Carlo simulation, which is the ‘autopoietic’, is prone to ‘error’ and differs from the more precise geometric formalism, which is still axiomatic.
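The compactness criterion just invoked can be illustrated in miniature (a toy sketch, not an economic model: the interval and the epsilon are arbitrary): a closed bounded interval of, say, prices is totally bounded, i.e., coverable by finitely many epsilon-balls, which is half of the “complete and totally bounded” condition for compactness of a metric space.

```python
# Total boundedness in miniature: a closed bounded interval admits a
# finite epsilon-net, i.e. finitely many centers whose eps-balls cover it.
def finite_epsilon_net(lower, upper, eps):
    """Return finitely many centers whose eps-balls cover [lower, upper]."""
    centers = []
    x = lower
    while x < upper:
        centers.append(x + eps)   # the ball around x+eps covers [x, x+2*eps]
        x += 2 * eps
    return centers

net = finite_epsilon_net(0.0, 10.0, 0.5)

# Sanity check: every sampled point of [0, 10] lies within eps of a center.
covered = all(min(abs(p - c) for c in net) <= 0.5 + 1e-9
              for p in [i / 100 for i in range(1001)])
print(len(net), covered)   # a finite cover exists
```

An unbounded “utility space” would admit no such finite net, which is one way of cashing out the worry that the topology of utility is undefined.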

At the formal level (in the manner of Hilbert’s formal system for geometry), eliminative structuralism, which is “schematic” as opposed to “assertoric,” makes the most sense epistemologically, in the sense that objects can be viewed from the exterior rather than the interior, through the categorical approach: all that matters is “arrows only” (morphisms between objects). I will conclude the Treatise with a postscript which discusses the social sciences and the problematic mathematical presuppositions of cybernetics/economics. A Java program would not be able to compute certain functions due to uncountability. We therefore suggest that the reader interested in an alternative ontology (to this problematic one) pursue the ontology advocated by Diedrich Diederichsen in his “Intimacy and Gesamtkunstwerk” in Kai Kein Respekt (predicated on “homosociality”).

Topos; local-global navigation through Negarestani’s “localization”

“All this work culminated in another notion, thanks to Grothendieck and his school: that of a topos. Even though toposes appeared in the 1960s, in the context of algebraic geometry, again from the mind of Grothendieck, it was certainly Lawvere and Tierney’s (1972) elementary axiomatization of a topos which gave impetus to its attaining foundational status. Very roughly, an elementary topos is a category possessing a logical structure sufficiently rich to develop most of “ordinary mathematics”, that is, most of what is taught to mathematics undergraduates. As such, an elementary topos can be thought of as a categorical theory of sets. But it is also a generalized topological space, thus providing a direct connection between logic and geometry. (For more on the history of categorical logic, see Marquis & Reyes 2012, Bell 2005.).”[10]

The foundational problem since Gödel’s Incompleteness Theorems is whether any formal system is complete and consistent. A system is complete if “all truths expressible in Σ are theorems”.[11] A system is inconsistent if both P and not-P can be proved. Moreover, Peano Arithmetic (derived from the Peano Axioms), if it is consistent, cannot prove its own consistency. This proof is shown by “translating the workings of Turing machines into arithmetic”.[12] Kripke-Joyal semantics, by contrast, has been shown to be complete.[13] Kripke-Joyal semantics views propositions as true within possible worlds: “The basic idea of the semantics is that a proposition is necessary if and only if it is true in all “possible worlds.” The idea is made precise as follows: A Kripke frame is a set W, the elements of which are called possible worlds, together with an accessibility relation R, that is, a binary relation between elements of W. A Kripke frame becomes a Kripke model when a valuation is given. A valuation val takes a world w and an atomic formula P and gives as value 0 or 1, to determine which atomic formulas are true at what particular worlds”. Each reference frame provides a truth value as to whether the atomic formula is true in such-and-such world.
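The quoted definition can be transcribed almost directly into code (the particular worlds, accessibility relation, and valuation below are an arbitrary illustration): “necessarily P” holds at a world w exactly when P holds at every world accessible from w.

```python
# A minimal Kripke model: worlds W, accessibility relation R, valuation val.
W = {"w1", "w2", "w3"}
R = {("w1", "w2"), ("w1", "w3"), ("w2", "w2")}          # who "sees" whom
val = {("w1", "P"): 0, ("w2", "P"): 1, ("w3", "P"): 1}  # truth of P per world

def holds(w, atom):
    """Is the atomic formula true at world w?"""
    return val[(w, atom)] == 1

def box(w, atom):
    """'Necessarily atom' at w: atom holds in every world accessible from w."""
    return all(holds(v, atom) for (u, v) in R if u == w)

print(box("w1", "P"))    # True: P holds at w2 and w3, both accessible from w1
print(holds("w1", "P"))  # False: yet P fails at w1 itself
```

The contrast between the two printed values is the whole point: necessity is evaluated across accessible worlds, not at the home world.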

What is most interesting about this “possible worlds” semantics is its use within topos theory. “As part of the independent development of sheaf theory, it was realised around 1965 that Kripke semantics was intimately related to the treatment of existential quantification in topos theory.[5] That is, the ‘local’ aspect of existence for sections of a sheaf was a kind of logic of the ‘possible’. Though this development was the work of a number of people, the name Kripke–Joyal semantics is often used in this connection.”[14] Topos theory uses an intuitionistic Kripke logic and can be visualized as follows (from Fernando Zalamea’s essay on Glass Bead):

Each “stalk” (i.e. vertical line) over A, B, C, D in C has a “germ” at various levels, i.e. the circles at the different levels; these are “sets” above the site. Here is the definition of a topos: “The category of sheaves of sets on a topological space is a topos. In particular, the category of sets is a topos, for it is the category of sheaves of sets on the one point space. This topos, denoted {pt}, is called the punctual topos.”[15] We can begin to dissect what a topos is; it is a “category of sheaves of sets on a site C”; for a more precise exposition of a topos and of the Grothendieck topology in more general cases, see the footnote.[16]

Fernando Zalamea, philosopher of math, explains:

“In a similar vein, Grothendieck toposes (categories equivalent to categories of sheaves over abstract topologies, 1962) constitute plastic sites, specifically open to dynamic variations. Grothendieck toposes unify deep insights on arithmetic (number) and geometry (form). Beyond Cantorian, classical, static sets, the objects in a topos are to be understood as generalizations of variable sets (see Figure 2). Instead of living over a rigid bottom, governed by classical logic, they live over a dynamic Kripke model, governed by intuitionistic logic. Beyond the classical example of the separated sheaf of holomorphic functions, a sheaf does not have to be separated in a general topos: points do not have to determine their associated objects. We can even imagine objects without points, defined only through flux processes. A wonderful example is the topos of actions of monoids. Such a topos has an underlying classical logic (where the law of excluded middle holds and points are essential) if and only if the monoid is a group. Thus, when we deal with structures which are monoid non-groups, the logic of their action is just intuitionistic, non-separated, closer to topological fluxions, deformations, disruptions.”

One can think of a sheaf of sets as follows. Here is “visuali[zation of] a sheaf in terms of stalks over each point of the space”[17]:

Next one can take “particular sections (associated to a germ of the stalk) corresponding to each of the red and green open sets of the space.”[18]

Then, one can “glu[e] together two sections to create a section of the larger open set that they provide a cover of.”[19]
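The gluing step can be sketched with a toy “sheaf” of functions on subsets of a discrete space (a deliberate simplification: genuine sheaves live over the open sets of a topology and satisfy further conditions). Two sections that agree on the overlap glue to a unique section over the union; sections that disagree cannot be glued.

```python
# Toy gluing: sections are dicts (point -> value) over "open sets" (here,
# plain subsets of a discrete space).
U = {1, 2, 3}                     # the "red" open set
V = {3, 4, 5}                     # the "green" open set

s_U = {1: "a", 2: "b", 3: "c"}    # a section over U
s_V = {3: "c", 4: "d", 5: "e"}    # a section over V

def glue(s1, dom1, s2, dom2):
    """Glue two sections iff their restrictions to the overlap agree."""
    overlap = dom1 & dom2
    if any(s1[x] != s2[x] for x in overlap):
        raise ValueError("sections disagree on the overlap: cannot glue")
    return {**s1, **s2}           # the unique section over the union

s = glue(s_U, U, s_V, V)
print(sorted(s))                  # [1, 2, 3, 4, 5]: a section over U ∪ V
```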

This is a radical breakthrough in the way one can view objects in an epistemological sense. The conceptual breakthroughs of topos theory are insinuated in Negarestani’s essay “Where is the Concept?” on localization. Negarestani even uses the terminology of ‘site’ and ‘topos’ at times in the essay. One of the primary methods of epistemology in this Treatise will be modeled on this approach of the localization of the concept, which Negarestani characterizes by its epistemological depth to “unmoor” the subject from fixed points and “cancel” any archetypal relation between subject and object, immersing the subject directly within the “navigational landscape”:

“Just like the desert that is one and the same and precisely because of its homogeneity we don’t have access to its landscape, the monism of nature does not allow us to know nature without organizing an epistemic breakage. Ontologically, nature does not distinguish itself from itself. Monism is in this sense an ontological reality that demands a necessary epistemic strategy: Exactly because of this excess of informational homogeneity — a desert that is one and the same everywhere — we can’t immediately approach nature or navigate it. The nature-culture division is an epistemic division, not an ontological one. From the possibility of epistemic traction, this division is necessary and far from rigid. It provokes approaches to nature hitherto unimagined. To claim that everything is nature is at best an indulgence in the vulgarity of the obvious and at worst, a complete blindness to the epistemic conditions through which we are able to progressively make sense of nature. The bimodalization of the universal to its global and local horizons is a navigational strategy which must be conceived through a local rupture, a regional discontinuity. To create or conceive this local rupture is the basic gesture behind the formation of the concept as a local site distinguished by its qualitatively differentiated information. It is the concept as a regional breakage or local disturbance in the qualitatively homogenous information that provokes approaches and pathways impossible in the absence of the epistemic rupture. Trying to understand nature without an epistemic division, solely through the ontological monism, is an appeal to mysticism. It results either in an ineffable conception of nature or an image of nature as a reservoir of meanings and stories about itself. 
Once we insist that the world is a repository of meanings, that it has stories to tell the subject without any demand for the subject to create a necessary epistemic condition, then we have already committed to conserve a stable relation between the knowing subject and the world. The world is always facing the subject as if it wants to tell a story, there is no need for the subject to destabilize its given status, to epistemically uproot itself so as to procedurally navigate the landscape. The subject of the world as a ready-made object of experience and a reservoir of meanings is quite stubbornly an anthropocentric and conservative form of subject even though it claims to be completely the opposite. Localization should be understood in terms of bringing about an epistemic condition that once rigorously pursued cancels any conserved relation between the knowing subject and the world, rather than anchoring the subject in an specific place, it unmoors the subject within a navigational landscape. This is the deracinating effect that registers itself as a condition of enablement insofar as it liberates epistemic possibilities which until now had remained captives of the tyranny of here and now — that is, the knowing subject tethered to a local domain and a privileged frame of reference. Localization has obvious implications for thought not only because we ourselves are local instantiations within the terrestrial horizon, but also from an epistemological perspective: the concept as the space through which we gain traction on the world is a local horizon. As the most fundamental unit of knowledge, the concept is a local horizon, a locally organized space of information within a vast inferential economy and immersed within the general structure of knowledge. So the question of localization allows us a form of systematic study of the local context, and in particular a systematic analysis of conceptual behavior. 
In this sense, we can say that localization is the ultimate procedural framework of thought. It’s a procedure — even a gradualist and stepwise procedure — because, as I shall argue, the local is not rooted. Its analysis is not a matter of zooming in and out on a specific point. Instead the examination of the local requires a procedure to follow it in a navigational context, in relation with other local horizons, via different directions and addresses. No axiomatic commitment at the level of the local makes sense unless through this procedure, which is to say, only when we localize parameters and orientations or generally speaking identify what makes a local domain local. The local is not a fixed point in space, it is a mobile framework immersed within a generic environment. Its internal analysis is always coupled with an external synthesis”[20]

To get a sense of what Negarestani exactly means by ‘localization’, one needs to be familiar with the work of René Thom. The sentence “the bimodalization of the universal to its global and local horizons is a navigational strategy which must be conceived through a local rupture” is a direct allusion to Thom. Thom was famous for his Catastrophe Theory, in which a local rupture creates an epistemological breakage in the fabric of homogeneous information. Catastrophe Theory was thought to be a method for predicting catastrophes, but the applications of the field proved not so fruitful. The largest import the field had on mathematics was Thom’s invention of the term “attractor,” which would become important in the field of chaos theory. One of the main reasons predictions and applications were not obtainable was that Thom’s approach was completely topological: an analogical hermeneutics of data, and therefore a qualitative study of quantitative data. The motivation of this approach was to undertake a natural-philosophy project which could unify body and soul into a geometric object.[21] To get a better sense of the procedure of localization, we will examine one of the seven elementary catastrophes which Thom describes:
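Before the figure, the cusp’s behavior can be sketched numerically, assuming the standard cusp potential V(x) = x⁴/4 + ax²/2 + bx, whose equilibria are the real roots of x³ + ax + b = 0 (the parameter values below are arbitrary). Sweeping the control b up and then back down, the tracked equilibrium jumps at different thresholds in each direction: a lag, i.e. hysteresis.

```python
# Hysteresis in the cusp catastrophe: sweep control parameter b forward and
# backward while tracking the nearest equilibrium of x**3 + a*x + b = 0.
import numpy as np

def nearest_equilibrium(a, b, x_prev):
    """Real root of x^3 + a*x + b closest to the previous state."""
    roots = np.roots([1.0, 0.0, a, b])
    real = roots[np.abs(roots.imag) < 1e-6].real
    return real[np.argmin(np.abs(real - x_prev))]   # state varies continuously

a = -3.0                         # a < 0 opens the bistable (cusp) region
bs = np.linspace(-3.0, 3.0, 601)

x = nearest_equilibrium(a, bs[0], 2.0)
forward = []
for b in bs:                     # sweep b upward
    x = nearest_equilibrium(a, b, x)
    forward.append(x)

x = forward[-1]
backward = []
for b in bs[::-1]:               # sweep b back down
    x = nearest_equilibrium(a, b, x)
    backward.append(x)
backward.reverse()

# In the bistable region the two sweeps sit on different branches:
hysteretic = any(abs(f - r) > 1.0 for f, r in zip(forward, backward))
print(hysteretic)                # the jump occurs at different b each way
```

The system “remembers” which stable sheet it came from, which is exactly the lag between states described next.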


This example is known as the “cusp catastrophe”. Here a local rupture creates a sudden jump between the upper stable region and the lower stable region. For Negarestani, this divergence creates the possibility, through epistemology, of grasping the ontology of a monist space. In the picture, as x and y change in the cusp region, z is very unstable. A good example of a catastrophe would be a hurricane. Also pictured is what is known as hysteresis, a lag between one state and the next. A good example of hysteresis (a lag) is the number of coronavirus cases between one week and the next, where the number of detected cases grows as new testing detects newer cases. As Dr. Fauci said, “When you are dealing with a virus outbreak, you’re always behind where you think you are.”[23] The key to localization is the liberation of the subject from the tyranny of a fixed point, moment, or perspective. As Negarestani says, “the deracinating effect that registers itself as a condition of enablement insofar as it liberates epistemic possibilities which until now had remained captives of the tyranny of here and now — that is, the knowing subject tethered to a local domain and a privileged frame of reference.” The paradigm of a “conserved relation” between subject and object in the speculative sense is overthrown. The philosophy of armchair philosophers is over. Negarestani goes on to describe the recursive anatomy of the process of “localization”:

“The question ‘where is the concept?’ demands a methodology for approaching or seeing the concept from a perspective that is adjacent, rather than from a perspective that is fixed upon it. This is by way of a recursive procedure in which we say ‘where is the concept?’ and then we repeat this procedure. In other words, localization (Where?) is combined with recursion. Since localization is an analytico-synthetic procedure that unbinds new alternative addresses for a local site/concept, then repeating localization means that new paths branch from the existing paths. This diffusion of pathways or addresses for a local horizon is registered as a ramified path structure where new alternative addresses and opportunities for local-global synthesis are progressively unfolded. As I argued the concept as a local site is fringed with possibilities of alternative reorientations. The ramified path structure suggests a form of step-wise navigation via these possible reorientations through which the concept is simultaneously studied, traced, revised and constructed. We ask ‘where is x?’ then we repeat the question over and over. Every time we localize x, we see or approach it from a new address in the environment in which x is immersed. In our example of a point, the point can be seen not only according to new variable coordinates but also according to different layers of organization. It can be conceived arithmetically, geometrically, algebraically, topologically, and so forth. Accordingly, the operation ‘localization × recursion’ yields new ramifying paths and in doing so broadens the scope of navigation — that is to say, the constructive passage from the local to the global. An intuitive way to understand the procedure involved with recursive localization and its imports is as follows: Think of the planet Earth. When we are standing on the surface of the planet, we are occupying a location on its geodetic surface.
From a local perspective, the geodetic surface appears to be flat and not curved. While occupying this point on the surface, if we ask ‘where is the earth?’, because of our immediate access to this local point where the global properties are perceived differently (i.e. locally), we would say the earth is a fixed sphere, we might even say it is just a flat surface. However, if we launch a perspective operator — a satellite — into the orbit and take pictures of the planet, upon compiling and integrating these pictures we will notice that the planet is fully mobile and it is spheroid. However, if we repeat this procedure from a broader neighborhood and take new orbital portraits or maps of the planet, then we will observe that not only is Earth spheroid, but also it is located within a celestial system held together by the gravitational force. This is a very intuitive and rather trivial understanding of how the product of localization and recursion works: By localizing the horizon and by way of reverse-engineering it from its orbit (i.e. possibilities of reorientation), rather than from its local fixed coordinates (i.e. information readily available by occupying a local section of it), recursive localization identifies the local horizon according to the site wherein it subsists. But from the perspective of recursive localization, the site is nothing but a cascade of ramifying paths and addresses. It is in the wake of these ramifying paths that the characterization of the local, its problems, imports and implications become a matter of navigation — that is to say, analysis and synthesis, remapping and reorientation, revision and construction.”[24]

Recursive localization becomes a process of “ramification” and “navigation”. By identifying the horizon of a locale, one is able to “reverse engineer” the pathways possible from one’s address in the setting (i.e. from the restricted and bounded information of a local site) and identify more and more global contexts and addresses through which to conceptualize that locale. A good example, as Negarestani says in a video, is the Möbius strip, which locally appears one way but globally another (it has only one side).[25] Another example is the torus: locally its surface resembles the plane R², but if we zoom out it is in fact a donut, which has homological properties different from those of a sphere (i.e. it has a “hole”). “The Poincare-Hopf Theorem relates local behavior of some differentiable object, namely C∞ vector fields, to the global topological structure of the manifold. Numerous theorems of this general type, relating local invariants to global structure, have been discovered. These form a large but unified subject called global differential geometry or differential topology.”[26] It is important for epistemology to integrate all of these different scales into a synthetic navigation of the landscape. One who abides by classical notions of subjectivity is still living in the paradigm of a “flat earth”, under the tyranny of a fixed point. An epistemology restricted to a local domain, when absolutized, is also the culprit behind why humans thought the universe was geocentric (rather than heliocentric): there was not enough information from a more universal context.
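The Poincaré-Hopf theorem quoted above admits a compact statement; the contrast between the sphere and the torus, the two examples invoked here, is a worked instance of a local invariant constrained by global structure:

```latex
% Poincaré-Hopf: for a compact differentiable manifold M and a smooth
% vector field v on M with isolated zeros z_i,
\sum_{i} \operatorname{ind}_{z_i}(v) \;=\; \chi(M).
% Since \chi(S^2) = 2, every smooth vector field on the sphere must vanish
% somewhere (the "hairy ball" theorem); since \chi(T^2) = 0, the torus
% admits a nowhere-vanishing field. The purely local data (the zeros of v)
% is thus constrained by the global invariant \chi.
```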

Minimal Metaphysics

“In genetic epistemology, as in developmental psychology, too, there is never an absolute beginning. We can never get back to the point where we can say, “Here is the very beginning of logical structures.” As soon as we start talking about the general coordination of actions, we are going to find ourselves, of course, going even further back into the area of biology. We immediately get into the realm of the coordinations within the nervous system and the neuron network, as discussed by McCulloch and Pitts. And then, if we look for the roots of the logic of the nervous system as discussed by these workers, we have to go back a step further. We find more basic organic coordinations. If we go further still into the realm of comparative biology, we find structures of inclusion, ordering, correspondence everywhere. I do not intend to go into biology; I just want to carry this regressive analysis back to its beginnings in psychology and to emphasise again that the formation of logical and mathematical structures in human thinking cannot be explained by language alone, but has its roots in the general coordination of actions.”[27]

Jean Piaget was a structuralist psychologist who had even once spoken with Dieudonné of the Bourbaki mathematicians. Piaget argues that language cannot be the autonomous building block for logical and mathematical structures (see quote above). Piaget notices that children learn basic notions of commutativity when counting pebbles: when they count the pebbles in one order or another, they get the same result. This is the basic mathematical property known as commutativity, where a + b = b + a; the sum is the same whether you add a first or b first. Piaget says in his Genetic Epistemology that it is with children that we have the best chance of “studying the development” of logic. “Since this field of biogenesis is not available to us, we shall do as biologists do and turn to ontogenesis. Nothing could be more accessible to study than the ontogenesis of these notions. There are children all around us. It is with children that we have the best chance of studying the development of logical knowledge, mathematical knowledge, physical knowledge, and so forth.” But Piaget suggests further that as soon as we locate some beginning of logic, we have to go back even further, to biology and the development of biological structure. Piaget espouses a philosophy similar to Gilbert Simondon’s idea of individuation (which we will discuss in the next section): as soon as we locate some individual element, we are already too late, because the process of individuation/ontogenesis has already occurred. We can trace this reasoning back to the idea that our universe of discrete elements goes back to a previous universe of continuous protoplasm. Genetic epistemology has no “absolute beginning”. René Thom says that “the fundamental aporia of mathematics is this opposition of discrete-continuous”:
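Piaget’s pebble observation can be stated computationally: however the collection is enumerated, the count is invariant. A minimal sketch (the pebble names are of course arbitrary):

```python
from itertools import permutations

def count_pebbles(pebbles):
    """Counting is order-invariant: every enumeration of the pebbles gives the same total."""
    return len(list(pebbles))

pebbles = ["p1", "p2", "p3", "p4"]

# Every permutation of the pebbles yields the same count,
# just as a + b = b + a yields the same sum.
counts = {count_pebbles(order) for order in permutations(pebbles)}
print(counts)  # {4}

a, b = 2, 3
print(a + b == b + a)  # True
```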

“[Thom:] For me, the fundamental aporia of mathematics is this opposition of discrete-continuous.

[Interviewer:] At this same time, this aporia has a dominant role in all thinking. You’ve said to me that, in the course of developing CT, you were trying to relate apparent discontinuities to an underlying continuity. About a century ago there existed a controversy with regard to the central nervous system: Is it continuous or discontinuous? We all know that our perceptions are discontinuous. The question was answered by anatomy: Santiago Ramón y Cajal was right. Neurons are contiguous. That may have nothing to do with human psychology. On the other hand, what is your view on this matter?

[Thom:] To begin with, I don’t agree at all with your statement that everyone knows that our perceptions are discontinuous. When I look at you, I’m seeing you in a continuous fashion!

[Interviewer:] I withdraw the term ‘perception’. I really mean ‘sensation’: The sensory receptors function in a discontinuous fashion.

[Thom:] Now you’re talking like a neurophysiologist! However, I fall back on my primary intuition. What gives you the right to say that a researcher in neurophysiology has more insight than my own first impressions? I reject that argument.

[Interviewer:] Yet certain things must be accepted as givens. A few moments ago you used the example of a film: Images pass in succession, and, thanks to a bit of ingenuity, an instrument, and to retinal after-imaging, one experiences continuity. But when I try to understand how the world works, I sense that there is a continuous basis upon which all things unfold in a more or less discrete fashion. Where does one place discontinuity with regards to the relationship between the continuous and the discrete?

[Thom:] For myself, I’m happier with the notion that the discrete is manufactured from the continuous, rather than that continuity arises from the discrete. I realize of course that the standard model in contemporary mathematics is based on the definition of number given by Dedekind, by what are called Dedekind cuts. This makes it possible, in theory, to construct continuity directly out of arithmetic, that is to say, on the basis of the discontinuous. However, this process is in reality highly nonconstructive. It amounts to saying: The real numbers can be constructed by taking rational numbers and bringing them indefinitely close to one another. Then if one makes a cut, that is to say, a division of the rationals into two classes, in such a way that each rational in the first class is less than each rational in the second class, (with the understanding that the differences between them approaches zero), this can be taken as the definition of a real number. It’s the traditional method for making Gruyère cheese*: You take some holes and start building the cheese around them. There isn’t very much cheese but there are lots of holes! Finally one ends up with no cheese at all, only holes! How is it possible, with nothing but holes, to fabricate a continuous and homogeneous paste? I must admit that it goes beyond me … The origins of all scientific thought can be situated in the paradoxes of Zeno of Elia: notably the story of Achilles and the tortoise. In it, one finds the fundamental opposition between the continuous and the discontinuous.“[28]

In a later section, contrary to Santiago Ramón y Cajal, it will be shown through the research of Jean Petitot that perception is continuous, and that vision requires a geometric ideality which Petitot calls “neurogeometry” (or “neurophysics” if you like) and which can be observed experimentally at the level of neurons. I agree with Thom that the discrete is born out of the continuous. But, as Thom describes, the typical construction of the real numbers, the Dedekind cut, runs the other way around (the continuous out of the discrete). The Dedekind cut uses two sets of rationals in order to construct an irrational number. The definition is as follows:

“Suppose that we already know about some real number x.

Then we could define a pair of sets (Ax, Bx), where

- Ax is the set of all rational numbers y such that y < x, and

- Bx is the set of all rational numbers y such that y > x.

The way to think about this is that you are cutting the number line by an infinitely thin knife, at x, and Ax is all the numbers to the left of the knife and Bx is all the numbers to the right. Part of this definition is very good: The sets Ax and Bx are both subsets of rationals, so they do not directly refer to real numbers. However, the problem with this definition is that it depends on us already knowing about the number x. The idea behind Dedekind cuts is to just work with the pairs (A, B), without direct reference to any real number. Basically, we just look at all the properties that (Ax, Bx) has and then make these “axioms” for what we mean by a Dedekind cut.”[29]
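The “knife” image above can be sketched in a few lines: represent a cut purely by the membership predicate of its lower set A, with no reference to the real number being defined. Here the predicate y < 0 or y² < 2 carves out the cut for √2, and bisection over rationals brackets it (a minimal illustration, not a full construction of the reals):

```python
from fractions import Fraction

# A Dedekind cut can be represented by the membership predicate of its
# lower set A: here A = {rational y : y < 0 or y**2 < 2}, the cut for sqrt(2).
# Note the test never mentions sqrt(2) itself, only rational arithmetic.
def in_lower_set(y: Fraction) -> bool:
    return y < 0 or y * y < 2

# Bisection over rationals: narrow a bracket [lo, hi] with lo in A, hi in B.
lo, hi = Fraction(0), Fraction(2)
for _ in range(50):
    mid = (lo + hi) / 2
    if in_lower_set(mid):
        lo = mid
    else:
        hi = mid

# lo and hi are rationals straddling the cut; the cut itself "is" sqrt(2),
# an irrational number built from nothing but pairs of rational sets.
print(float(lo))  # ≈ 1.4142135623...
```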

Deleuze makes use of the Dedekind cut to define his ideality of difference, which is the “third synthesis of time”.[30] We will suggest an alternative to this construction, in the spirit of Thom: one that moves from the continuous to the discrete, founding geometry on the use of infinitesimals, which was made rigorous in Lawvere’s “synthetic differential geometry”, a development of twentieth-century mathematics.
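The core axiom of Lawvere’s synthetic differential geometry, the Kock-Lawvere axiom, can be stated as follows; note that it is consistent only intuitionistically (in a suitable topos), not in a classical set-theoretic universe:

```latex
% The set of nilpotent infinitesimals on the line R:
D \;=\; \{\, d \in R \;:\; d^2 = 0 \,\}
% Kock-Lawvere axiom: every function f : D -> R is affine on D, i.e. there
% exists a unique b in R (the derivative f'(0)) such that
\forall d \in D:\quad f(d) \;=\; f(0) + b \cdot d
% Every function is thus "microlinear": the continuum comes first and the
% derivative is read off directly from infinitesimals, rather than the
% continuum being assembled from discrete points as in Dedekind's cuts.
```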

Thom argues, in Semiophysics, for “the necessity of restoring by appropriate minimal metaphysics some kind of intelligibility to our world.”[31] If we are to found our metaphysics upon some topological truths, we could start, for example, with the extensive use of the ‘continuous’ in much of Thom’s topology. Thom was most famous for his construction of ‘cobordism’, an equivalence relation between two manifolds based upon the boundary operator (which can be thought of as obeying a logic similar to that of a derivative). Thom’s greatest influence in the applied sciences may be his coinage of the term ‘attractor’. The ‘basin of attraction’ is technically defined as the set of initial states in the phase space whose trajectories approach the attractor as t approaches infinity; it forms an open neighborhood of the attractor. A neighborhood is a basic definition within topology: it articulates notions such as ‘limit points’ and rigorously defines closeness through the use of open sets. Open sets are the fundamental basis of topology, which we can even generalize to the cases of sheaves and Grothendieck topologies (discussed in the last section). If we are really to define some “minimal metaphysics”, we would need to re-examine the notion of Time as continuously running in the manner of the real number line. To make sense of the real numbers, and therefore of time, we could appeal to some topological properties: the real numbers form a “connected” interval, and the infinity utilized in the real numbers is an uncountable infinity. Time as the real number system is therefore time as a continuous backdrop. Should we accept this continuum of time, which is used in any system (dynamical or even not dynamical)?
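One standard formalization of the basin, for a flow φ_t on a phase space X with attractor A (a sketch of the textbook definition, not Thom’s own wording):

```latex
% Basin of attraction of an attractor A under the flow \varphi_t on X:
B(A) \;=\; \{\, x \in X \;:\; \lim_{t \to \infty}
              \operatorname{dist}\!\left(\varphi_t(x),\, A\right) = 0 \,\}
% B(A) is an open set containing A: this openness is exactly what lets the
% basin be described in the language of neighborhoods and open sets of
% point-set topology.
```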

Moreover, central to topology are the morphisms known as homeomorphisms: a homeomorphism is a continuous bijection between two topological spaces whose inverse is also continuous. Discreteness can be glimpsed in topological manifolds which have “holes” (for example, the torus has one hole), while continuity can be thought of, in the study of differential geometry, through the diffeomorphisms between two manifolds. Thom argues that topology, in catastrophe theory, can be a method of qualitative analysis of data based on regional discontinuities, such as a jump (as was discussed in the last section). These discontinuities, or singularities, can be studied through the method of analytic continuation (the means by which the zeta function of the Riemann Hypothesis is extended, for example). Through analytic continuation we are able to extend a function across the undefined region of a singularity, such as the cusp catastrophe (as pictured below):
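The contrast between “holes” and local sameness can be made concrete with the Euler characteristic χ = V − E + F, a global invariant that distinguishes the sphere (χ = 2) from the torus (χ = 0) even though both look locally like the plane. A minimal sketch, using standard cell-complex counts:

```python
def euler_characteristic(vertices: int, edges: int, faces: int) -> int:
    """Euler characteristic of a 2-dimensional cell complex: V - E + F."""
    return vertices - edges + faces

# The surface of a cube is a cell complex for the sphere:
# 8 vertices, 12 edges, 6 faces.
sphere_chi = euler_characteristic(8, 12, 6)

# The standard square-with-identified-sides cell structure on the torus:
# 1 vertex, 2 edges, 1 face.
torus_chi = euler_characteristic(1, 2, 1)

print(sphere_chi, torus_chi)  # 2 0
# chi = 2 - 2g recovers the genus g (the number of "holes"):
# g = 0 for the sphere, g = 1 for the torus.
```

No local measurement on either surface can detect χ; it is only the globally assembled count that separates the two spaces, which is the local/global point at issue here.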

As Peter Tsatsanis writes, the genius of Thom’s metaphysics was to theorize the “monism of body and soul” through a geometric object. But the intelligible statements were completely qualitative, as opposed to quantitative, based on an analogical method of reasoning that Thomas Aquinas had once described. The process of morphogenesis, even in biology, is the actualization of potential states, which can be articulated as the differential Process moving in the control space. (Other examples of a mathematical Process include stochastic processes, such as those which abide by Markov chains.) For Thom, morphogenesis is ontologized biology; ontogenesis is ontologized structuralism. René Thom goes on to say:

“Modern science has made the mistake of foregoing all ontology by reducing the criteria of truth to pragmatic success. True, pragmatic success is a source of pregnance and so of signification. But this is an immediate, purely local meaning. Pragmatism, in a way, is hardly more than the conceptualized form of a certain return to animal nature. Positivism battened on the fear of ontological involvement. But as soon as we recognize the existence of others and accept a dialogue with them, we are in fact ontologically involved. Why, then, should we not accept the entities suggested to us by language? Even though we would have to keep a check on abusive hypostasis, this seems the only way to bring a certain intelligibility to our environment. Only some realist metaphysics can give back meaning to this world of ours.”[32]

It seems that we should consider that entities imply “ontological involvement”, even linguistic entities, in order to ascribe intelligibility to the world. And Thom is saying that meaning exists through a “realist metaphysics”. In a Twitter interaction with me, Negarestani commented upon this Thom quote, saying: “Thom was a great mathematician but he himself mistook metaphysical theses with epistemical ones. This is why his name now in math and biology societies equals the patient zero of charlatanism.”[33] In order to be sure we are not hypostatizing an ontology supported by theology or by some abusive axiom, I would suggest that our minimal metaphysics be “de-ontologized” in the manner of Laruelle’s realism, which I will discuss in a later section on eliminativism. But the question then remains whether we are of the realist camp or the eliminative camp. I would argue that we accept “ontic structural realism” while simultaneously deriving a “topological hermeneutics” that is analogical, in order to ascribe intelligibility to the world, though the necessity of such a metaphysics must be suspended. So the real first cause would not be an Aristotelian realism of structure, even though physics is real and, as a structure, ontic (i.e. there is an “a priori” on the level of relativity: light is absolute); rather, the real cause of ontology is the ineffability of the Real, which cannot be philosophized and can only be hypothesized in the manner of topoi and Grothendieck universes, which require the Tarski axiom. But even with this ontology, Negarestani has said we need to be careful: “Zalamea is a gentleman. But is this enough? Is this what we actually want: a sheaf of romantic vagaries where all math fields become the expressions of the sublime? 
Sheaves everywhere=tender-heart ontological hypothesization?”[34] Therefore, I would counter that the inaccessible cardinal assumed by topos theory is actually (according to my investigations) not invoked in the contemporary mathematics of algebraic geometry and number theory. René Thom is arguing for some minimal metaphysics. But we can, in an eliminativist way, desublimate/de-ontologize this metaphysics for the purposes of a hermeneutics (of logic, of a formal system, of physics) and treat any “ontological involvement” as merely an axiom which we do not need to invoke (e.g. Andrew Wiles’ non-usage of Grothendieck universes)! Through the axiomatic approach of topos theory, one is able to conceive of arithmetic, or number, as inherently geometric, or form. We will discuss this breakthrough in the work of Alain Connes in a later section; his work in non-commutative geometry (mathematical physics) has been shown to be a means of expressing quantum field theory (theoretical physics). Is mathematics the hermeneutics of physics? The more challenging question remains whether we should accept the ontology of physics; that is, do we accept that physical entities really do exist?

Individuation of potentia within a metastable system (Gilbert Simondon)

If we are to follow the thinking of Piaget, who said there is no “absolute beginning”, then we must consider the father of this thinking of individuation: Gilbert Simondon. Simondon’s thinking had a great influence on the likes of Gilles Deleuze, René Thom, Bernard Stiegler, Gilles Châtelet and François Laruelle.[35] Below, Simondon explains the metaphysical logic of individuation in his seminal essay The Genesis of the Individual:

“[H]ylomorphic theory decrees that the individuated being is not already given when one comes to analyze the matter and form that will become the sunolon (the whole): we are not present at the moment of ontogenesis because we have always placed ourselves at a time before this process of ontogenetic formation actually takes place. The principle of individuation, then, is not grasped at the point where individuation itself occurs as a process, but in that which the operation requires before it can exist, that is, a matter and a form. Here the principle is thought to be contained either in the matter or the form, because the actual process of individuation is not thought to be capable of furnishing the principle itself, but simply of putting it into effect. Thus, the search for the principle of individuation is undertaken either before or after individuation has taken place, according to whether the model of the individual being used is a physical one (as in substantialist atomism) or a technological and vital one (as in hylomorphic theory). In both of these cases, though, there remains a region of uncertainty when it comes to dealing with the process of individuation, for this process is seen as something that needs to be explained, rather than as something in which the explanation is to be found: whence the notion of a principle of individuation. Now, if this process is considered as something to be explained, this is because the received way of thinking is always oriented toward the successfully individuated being, which it then seeks to account for, bypassing the stage where individuation takes place, in order to reach the individual that is the result of this process. In consequence, an assumption is made that events follow a certain chronology: first, the principle of individuation; then, this principle at work in a process that results in individuation; and finally, the emergence of the constituted individual. 
On the other hand, though, were we able to see that in the process of individuation other things were produced besides the individual, there would be no such attempt to hurry past the stage where individuation takes place in order to arrive at the ultimate reality that is the individual. Instead , we would try to grasp the entire unfolding of ontogenesis in all its variety, and to understand the individual from the perspective of the process of individuation rather than the process of individuation by means of the individual”

Deleuze was particularly influenced by Simondon and Simondon’s “pre-individual fields”; he utilized this idea in his articulation of becoming-actual out of the virtual. Simondon argues that an atomist metaphysics is wrong for attempting to locate the origin of structure in the individual atom, for the atomistic individual takes place after the process of individuation has already occurred. Instead, there is a pre-individual field from which the atom was individuated. We can turn to Sir Michael Atiyah’s explanation of this problem, whereby our world of the discrete (atoms) was individuated from protoplasm.[36] Similarly, the Aristotelian hylomorphic theory is wrong for attempting to locate the origin before individuation in the metaphysical notions of “matter” and “form”, which individuation puts into place. Simondon makes the argument that we should make the principle of individuation itself the explanation for individual elements. Individuation is “a never-ending ontological process.”[37] Deleuze’s own philosophy of actualization is based on Simondon’s principle of individuation; refer to the fifth chapter of Difference & Repetition, “The Asymmetrical Synthesis of the Sensible.” For Simondon, one can think of individuation as the process by which a metastable system, with a multiplicity of potential states, stabilizes.

This potentia is the wellspring from which the conatus actualizes. Simondon makes the argument that there is an ontology to this process of individuation. This question concerned Leibniz in his theory of bodies in motion, as Gilles Châtelet explains (for Leibniz was a deist).[38] How could stationary bodies get out of rest? This relation between states is completely non-thetic. The system is an immanent system which resolves (unlike the Hegelian dialectic) by way of its own metastability. Central to Simondon (and Deleuze) is the notion of the “singularity” of the thing, which can be thought of as the specificity of the thing. Gilbert Simondon writes:

“To a certain extent, the idea of a principle of individuation has been derived from a genesis that works backward, an ontogenesis “in reverse,” because in order to account for the genesis of the individual and its defining characteristics one must assume the existence of a first term, a principle, which would provide a sufficient explanation of how the individual had come to be individual and account for its singularity (haecceity) — but this does not prove that the essential precondition of ontogenesis need be anything resembling a first term. Yet a term is itself already an individual, or at least something capable of being individualized, something that can be the cause of an absolutely specific existence (haecceity), something that can lead to a proliferation of many new haecceities. Anything that contributes to establishing relations already belongs to the same mode of existence as the individual, whether it be an atom, which is an indivisible and eternal particle, or prime matter, or a form. The atom interacts with other atoms through the clinamen, and in this way it can constitute an individual (though not always a viable one) across the entire expanse of the void and the whole of endless becoming. Matter can be impressed with a form, and the source of ontogenesis can be derived from this matter-form relation. Indeed, if haecceities were not somehow inherent within the atom, or matter, or indeed form, it would be impossible to find a principle of individuation in any of the above-mentioned realities. To seek the principle of individuation in something that preexists this same individuation is tantamount to reducing individuation to nothing more than ontogenesis. The principle of individuation here is the source of haecceity.”

Haecceity for Simondon is to be found in the principle of individuation itself, and ontogenesis cannot be merely reduced to a metaphysical atomism. It is through the individuation of the cell that the cell acquires its unique singularity. It is through individuation that the work of art acquires its aura. It is through individuation, through becoming (in the Deleuzian sense), that the actual becomes manifest from the wellspring of the virtual. For Deleuze, this actualization (differenCiation) is based upon the realm of virtuality (differenTiation), the pre-individual singularities of the Idea. The latter is defined in the fourth chapter of Difference & Repetition and the former in the fifth chapter. The Idea is defined as the “relations of the universal” — difference-in-itself. The “a priori” for Deleuze is the “pure past,” which pre-exists the empirical present and is transcendental and metaphysical (in the pre-individual Idea), and which the third synthesis then makes extensive and expresses (i.e. differenCiation) through its “correspondence” (unification) between the present, the “larval subject” (“empirical ground”), and the undifferenciated Idea of the “pure past”[39]:

“However, actualization happens as individuating difference is the determination of differentiation in the Idea, and not specification in the concept. So determination must be distinguished from specification and Idea from conceptuality. How so? Determination is to be understood as the distinction in the Idea as the pure virtual realm, whereas specification in concepts already refers to given differentiated concepts. The Idea as pure virtual is non-conceptual difference in-itself, insofar as it is merely latent in the whole, not already specified by conceptual identity. Thus actualization achieves an individuated difference by determining an immanent difference in the Idea. Determination in actualization determines a) extrinsic difference of instances contracted in the present and b) the intrinsic difference between degrees of contraction in memory; the difference between present and past is determined as the difference between extensive repetition related to successive instants (living present of habit, expressing thought or larval subject) and intensive repetition of co-existing levels in the past (past as whole, contraction of time (pure past) by space / dilution of space by time (pure past) / expressed Idea of the virtual undifferenciated being). (Deleuze 1968: 114, 1994: 84). It is a third synthesis of these two moments, as expressing and expressed which guarantees their correspondence, and their unitary being. Thought determines in the passivity of the larval subject (present as contracted past, empirical ground), the Idea is undifferenciated being (pure past as transcendental/metaphysical ground); between them is the pure empty form of time which transcendentally guarantees that the indeterminate (Idea) can become determinate (Deleuze 1968: 220, 1994: 169). 
This is a purely logical, non-chronological time void of empirical content (living present of habit) or metaphysical substance (contractions/dilations of ontological memory) and guaranteeing the correspondence between thinking and being, expressing and expressed. Brassier writes:

“Accordingly, it establishes the correlation between the determination of thought as individuating difference borne by the intensive thinker, and the determinability of being as differentiated but undifferenciated pre-individual realm. Thus it is the third synthesis of time which accounts for the genesis of ontological sense as that which is expressed in thought, and which relates univocal being directly to its individuating difference as the expressed to its expression. In this regard, it is indissociable from the transcendent exercise of the faculties through which the Idea is generated.” (NU: Pg 179)”[40]

The future, Deleuze’s third synthesis of time, is defined as the ‘fractured I’ through “Kant’s definition of time as pure and empty form, Hölderlin’s notion of ‘caesura’” and the mathematical definition of the Dedekind cut.[41] Daniela Voss argues that “of central importance is the notion of the cut, which is constitutive of the third synthesis of time defined as an a priori ordered temporal series separated unequally into a before and an after.”[42] The future (or third synthesis) is the means of differenCiating the pre-individual Idea, the latter being ideal differenTiation (the second synthesis) of the realm of problematic multiplicity/mathematical differentials/pure difference. DifferenCiation is a psychic Simondonian individuation (based on biology), linking “the expressed to its expression”, “thinking and being, expressing and expressed”:

“Thus Deleuze’s account of spatio-temporal synthesis begins by ascribing a privileged role to organic contraction in the 1st synthesis of the present, proceeds to transcendentalize memory as cosmic unconscious in the 2nd synthesis of the past, and ends by turning a form of psychic individuation which is as yet the exclusive prerogative of homo sapiens into the fundamental generator of ontological novelty in the 3rd synthesis of the future… Transcendental access to the sense of being is internalized within experience through the transcendent exercise of the faculties that generates Ideas as the intensional correlates of larval thought (albeit a ‘sense’ which is indissociable from non-sense).” (NU: Pg: 201)”[43]

The Dedekind cut is defined as a virtual point (an irrational number, or cut) that exists between two sets of rational numbers (see the last section). The Dedekind cut is discussed in Chapter 4 of D&R on ‘differenTiation’:

“Three principles which together form a sufficient reason correspond to these three aspects: a principle of determinability corresponds to the undetermined as such (dx, dy); a principle of reciprocal determination corresponds to the really determinable (dy/dx); a principle of complete determination corresponds to the effectively determined (values of dy/dx). In short, dx is the Idea — the Platonic, Leibnizian or Kantian Idea, the ‘problem’ and its being. The Idea of fire subsumes fire in the form of a single continuous mass capable of increase. The Idea of silver subsumes its object in the form of a liquid continuity of fine metal. However, while it is true that continuousness must be related to Ideas and to their problematic use, this is on condition that it be no longer defined by characteristics borrowed from sensible or even geometric intuition, as it still is when one speaks of the interpolation of intermediaries, of infinite intercalary series or parts which are never the smallest possible. Continuousness truly belongs to the realm of Ideas only to the extent that an ideal cause of continuity is determined. Taken together with its cause, continuity forms the pure element of quantitability, which must be distinguished both from the fixed quantities of intuition [quantum] and from variable quantities in the form of concepts of the understanding [quantitas]. The symbol which expresses it is therefore completely undetermined: dx is strictly nothing in relation to x, as dy is in relation to y. The whole problem, however, lies in the signification of these zeros. Quanta as objects of intuition always have particular values; and even when they are united in a fractional relation, each maintains a value independently of the relation. As a concept of the understanding, quantitas has a general value; generality here referring to an infinity of possible particular values: as many as the variable can assume. 
However, there must always be a particular value charged with representing the others, and with standing for them: this is the case with the algebraic equation for the circle, X² + Y² − R² = 0. The same does not hold for ydy + xdx = 0, which signifies ‘the universal of the circumference or of the corresponding function’. The zeros involved in dx and dy express the annihilation of the quantum and the quantitas, of the general as well as the particular, in favour of ‘the universal and its appearance’. The force of the interpretation given by Bordas-Demoulin is as follows: it is not the differential quantities which are cancelled in dy/dx or 0/0 but rather the individual and the individual relations within the function (by ‘individual’, Bordas means both the particular and the general). We have passed from one genus to another, as if to the other side of the mirror: having lost its mutable part or the property of variation, the function represents only the immutable along with the operation which uncovered it. That which is cancelled changes in it, and in being cancelled allows a glimpse beyond of that which does not change. In short, the limit must be conceived not as the limit of a function but as a genuine cut [coupure], a border between the changeable and the unchangeable within the function itself. Newton’s mistake, therefore, is that of making the differentials equal to zero, while Leibniz’s mistake is to identify them with the individual or with variability. In this respect, Bordas is already close to the modern interpretation of calculus: the limit no longer presupposes the ideas of a continuous variable and infinite approximation. On the contrary, the notion of limit grounds a new, static and purely ideal definition of continuity, while its own definition implies no more than number, or rather, the universal in number. 
Modern mathematics then specifies the nature of this universal of number as consisting in the ‘cut’ (in the sense of Dedekind): in this sense, it is the cut which constitutes the next genus of number, the ideal cause of continuity or the pure element of quantitability. In relation to x, dx is completely undetermined, as dy is to y, but they are perfectly determinable in relation to one another. For this reason, a principle of determinability corresponds to the undetermined as such. The universal is not a nothing since there are, in Bordas’s expression, ‘relations of the universal’. dx and dy are completely undifferenciated [indifferencies], in the particular and in the general, but completely differentiated [differenties] in and by the universal.[44]

At its heart, Deleuze’s position is an idealism: it founds both the differential unconscious (the pure past) and the ‘fractured I’ (of the future) upon the ideal “static” continuity of number, the “universal in number” — thought not as individual elements but as the “relations of the universal”, i.e. pure differentials. Continuity belongs to “the realm of Ideas” insofar as it does not depend upon the senses or geometric intuitions: the ideal cause of continuity is undifferenciated and undetermined with respect to the individual x or y (dx to x, or dy to y), yet, as an “ideal cause of continuity,” it is completely determined and “differentiated in and by the universal” (dx to dy). This may recall Alain Badiou’s critique of Deleuze as “a philosopher of the One.”

But we are not critiquing the idealism of Deleuze in this Treatise (we will address Laruelle’s de-ontologization of idealism in a later section). We are merely pointing out that any logic of the individual should not be based on atomistic propositions, for the terms of such propositions have already been individuated. “The tree has a trunk, branches and leaves.” Yes, this statement is true, but are these the fundamental building blocks? The tree was born out of an acorn and its interactions with the earth! And what of the soil? Piaget believed cognitive functions are epigenetic.[45] Moreover, there is no “absolute beginning”: the basic logic that can be traced to the child’s development and its coordination of actions can be traced even further back, to biology. We hope instead to found logic upon geometry itself, as Thom had proposed.

the Aristotelian realism of structure existing as concrete systems which “ontologically” instantiate the structure; Einsteinian relativistic physics and its relationship to philosophical time

“The in re structuralism (“in the thing”),[5] or modal structuralism (particularly associated with Geoffrey Hellman),[4] is the equivalent of Aristotelian realism[9] (realism in truth value, but anti-realism about abstract objects in ontology). Structures are held to exist inasmuch as some concrete system exemplifies them. This incurs the usual issues that some perfectly legitimate structures might accidentally happen not to exist, and that a finite physical world might not be “big” enough to accommodate some otherwise legitimate structures.”[46]

““Il n’y a donc pas un temps des philosophes.” Einstein’s reply — stating that the time of the philosophers did not exist — was incendiary. What Einstein said next that evening was even more controversial: “There remains only a psychological time that differs from the physicist’s.””[47]

In the early 20th century, there was a famous debate between Einstein and Bergson. Einstein argued that the time of the philosophers did not exist: there remained only psychological time, which errs, and the time of the physicists. According to Einstein’s relativity, time depends upon one’s inertial frame of reference, and the only absolute invariant is the speed of light c.


For each inertial frame of reference, here Frame A and Frame B, the one invariant feature is that the light cone appears at 45 degrees. As a Stanford course explains: “In the following figure we display several different inertial reference frames as observed in Alice’s frame. Notice that the lightcone will always appear at 45 degrees in each of these frames, that is in each frame light is observed to propagate at c.” So space and time, interwoven into a four-dimensional Minkowski spacetime (a four-coordinate geometry with a pseudo-Euclidean rather than Euclidean metric), are relative to light![49] And the speed of light c is the only absolute!


Each vector is the velocity at which one is traveling relative to the speed of light. “Notice that for reference frames which move very fast the ct’ and x’ axes approach each [other] and would actually meet for frames moving at c.” Something moving at the speed of light would thus appear “lightlike” (as opposed to “timelike” or “spacelike”). Future and past are delimited by the light cone, determined by the speed of light c = 299,792,458 m/s.
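The invariance just described can be checked directly: under a Lorentz boost the coordinates (ct, x) of an event change, but the Minkowski interval (ct)² − x² does not, and a light ray stays on the 45-degree cone. A minimal sketch in Python (the 0.6c velocity and the sample event are illustrative assumptions of ours, not from the text):

```python
import math

C = 299_792_458.0  # speed of light in m/s, the sole invariant

def boost(ct, x, v):
    """Lorentz boost along x into a frame moving at velocity v (m/s)."""
    beta = v / C
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (ct - beta * x), gamma * (x - beta * ct)

def interval_sq(ct, x):
    """Squared Minkowski interval s^2 = (ct)^2 - x^2."""
    return ct**2 - x**2

ct, x = 5.0, 3.0                        # an event in Alice's frame
ct_b, x_b = boost(ct, x, 0.6 * C)       # the same event in Bob's frame (0.6c)
print(interval_sq(ct, x))                                        # 16.0
print(abs(interval_sq(ct, x) - interval_sq(ct_b, x_b)) < 1e-9)   # True: invariant

# A light ray (ct = x) remains lightlike in every frame:
lct, lx = boost(1.0, 1.0, 0.6 * C)
print(lct == lx)  # True: still on the 45-degree cone
```

The coordinates of the event differ between the two frames, yet the interval agrees — this is the precise sense in which c is the only absolute.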

Einstein’s argument was that the time of the philosophers did not exist: atemporal time as a “backdrop” does not exist. Einstein hated Bergson’s idea of duration. We would also argue that Heidegger’s time is too idealist, and that his metaphysics of time has no standing with respect to physical reality: the problem of his ontology is that it founds temporality upon purely psychological notions. Jean Wahl had criticized Heidegger for trying to create a “Nunc Stans” through a unification of his three ekstases of time (fallenness, thrownness and projection). Space and time can only be experienced through the embodied vectors of ct and x. The ramifications for epistemology are tremendous. Our finite world is too small to exemplify these macro-structures, yet they really do exist once we are at that scale, and they are exemplified in a completely “in situ” manner. The only means of comparing two spaceships traveling at different speeds is therefore to compare their respective vectors to the speed of light. Light is the only absolute. In this sense, there is an a priori faktum, namely the absolute which is the speed of light. Hyperbolic geometry, whether in the Klein model or the Poincaré half-plane, usually has an “absolute,” the locus at infinity; for the Minkowski model, the speed of light plays this role. But on a micro- or meso-level, can we conceive of a parameterized time, as in the earlier section where we discussed René Thom? Aristotle viewed time as a series of ‘now-points’. Is not the axis of time equivalent to the real number line? In the case of the stock market, we have a stochastic process unfolding continuously. So time as a running variable does not seem to be refuted by Einstein’s four-dimensional model; it merely shows that the experience of time is relative to one’s velocity.
And the time coordinate of the four-vector v = (ct, x, y, z) = (ct, r) is still “continuous.” In this sense we can still correctly assume the time of Simondon’s individuation, in which a cell individuates and takes form in real time (some continuously running variable such as t).

Another question is the issue of particle physics. Zeno’s Paradox presents the problem of infinite subdivision. If we follow the physicists, is the universe constituted of subatomic particles? Negarestani has been careful to point out that atoms in the colloquial sense are not the same as atoms in the scientific sense, and he notes that van Fraassen asks whether electrons really do exist. From my own conversations with a mathematician, I have gleaned that any two electrons are identical, but no two protons are. Can we make claims about the identity of two things if we are not even allowed this relation of identity? I will address the problems of object-oriented programming and cybernetics in the final section of this Treatise, where virtual objects are constituted and compared to one another, and discuss the problems that arise when a computer must compute a function over an uncountable domain.

Can we accept the structure (expressed by mathematical physics) that has been deployed in physics as really existing? For example, string theory makes use of the compactification of extra dimensions, following the work of Kaluza. Another mathematical theory employed in theoretical physics is group theory, drawn upon in particle physics: the spin of subatomic particles is described through the 3D rotation group SO(3) (or rather its double cover SU(2)). We leave these questions of the existence of physical entities to philosophers of science and choose instead to focus on the questions of structure (its epistemology and ontology) in the philosophy of mathematics. We will, however, argue that ‘neurophysics’ does in fact exist.

“A mathematical neurophysics of the phenomenology of the perceived world and common sense” (Petitot)

“In other words, it is only by restricting phenomenal reality to its most elementary form (essentially, the trajectories of material bodies, fluids, particles, and fields) that we have been able to carry through the programme of reconstruction and computational synthesis. For the other classes of phenomena, this project has long come up against unsurmountable epistemological obstacles. At this point, it was taken as self-evident that there was an unavoidable scission between phenomenology (being as it appears to us in the perceived world and the cognitive faculties that process it) and physics (the objective being of the material world). However, we may say that it is not so much self-evident as a straightforward prejudice. In any case, this disjunction transformed the perceived world into a world of subjective-relative appearances — mental projections — with no objective content and belonging to psychology. Beyond psychology, the most that could be attributed to these appearances in the way of objectivity was a logical form of objectivity to be found in the theories of meaning and mental contents, from Bolzano and Frege, Husserl and Russell, to contemporary analytical philosophy. We may say that the current work aims to go beyond this scission by developing a mathematical neurophysics of the phenomenology of the perceived world and common sense.”[51]

Jean Petitot has argued for a “mathematical neurophysics” in order to unify the divided fields of phenomenology and physics. The fields studying perception were taken to be too subjective, deemed mere “psychology,” and could achieve objectivity only within logic and analytic philosophy. Physics, for its part, could not explain the perceived world as it is immediate to us, but only situations at the macro-scale or at the most fundamental level. Petitot argues that this division between phenomenology and physics is a prejudice. He constructs a theoretical computational machine which uses λ-calculus for the operations of the “neural hardware” and differential geometry for the operations of the “geometric software.” Petitot notes that brain imaging has experimentally demonstrated that V1 neurons construct a “cut locus” (a notion from differential geometry).

“Our last example concerns the cut locus of a figure, also called the generalized symmetry axis or ‘skeleton’. Following the psychologist Blum [26], Thom [27] always stressed its fundamental role in perception (see Fig. 2.5). Once again, imaging can show us the neural reality of the construction of this inner skeleton, for which there is no trace whatever in the sensory input, the latter consisting merely of an outer contour. Figures 2.6 and 2.7, produced by David Mumford’s disciple Tai Sing Lee, illustrate the response of a population of simple V1 neurons, whose preferred orientation is vertical, to textures with edges specified by opposing orientations. Up to around 80–100 ms, the early response involves only the local orientation of the stimulus. Between 100 and 300 ms, the response concerns the overall perceptual structure and the cut locus appears. These experiments are rather delicate to carry out, and they are much debated, but the detection of cut loci seems to be well demonstrated experimentally”

“All these examples share the fact that the geometry of the percept is constructed — Husserl would say ‘constituted’ — from sensory data which do not contain it, whence it must originate somewhere else. Put another way, they all involve subjective Gestalts. This is indeed why we chose them, because, as claimed by Jancke et al. [30], these subjective global structures ‘reveal fundamental principles of cortical processing’, the kind of principles that interest us here. The origins of visual perceptual geometry can be found in the functional architecture which implements an immanent geometry, and it is the latter that provides the focus of neurogeometry.”[52]

The geometry of the percept is constructed from sensory data that do not directly exemplify this geometry (the sensory input contains only a rough contour).[53] The cut locus is created instead from mental Gestalts. This is an immanently experienced neurogeometry which does not appear out in the world but is, rather, subjective. Petitot argues that this immanent geometry is the “synthetic a priori”: Kant had said that geometry is “a priori,” and Petitot’s conclusions seem to verify this tenet experimentally. One need only examine the texture strip and the corresponding spatiotemporal response pictured in Fig. 2.6 to see how this geometry is completely immanent to the subject.
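The cut locus invoked here can be made concrete: it is the set of interior points whose nearest boundary point is not unique (Blum’s “skeleton”), and, as the quoted passage stresses, no trace of it is present in the boundary contour itself. The following toy sketch in Python has nothing to do with Petitot’s neural model; it is only a minimal illustration of the geometric notion, assuming a rectangle and a grid resolution of our own choosing:

```python
# Toy computation of the cut locus (medial axis) of a W-by-H rectangle:
# the interior points whose nearest boundary point is not unique.
W, H, N = 4.0, 2.0, 40   # rectangle dimensions and grid resolution (illustrative)
EPS = 1e-9

def side_distances(x, y):
    """Sorted distances from (x, y) to each of the four sides."""
    return sorted((x, W - x, y, H - y))

skeleton = [
    (W * i / N, H * j / N)
    for i in range(1, N) for j in range(1, N)
    # on the cut locus, (at least) two sides are equally near:
    if abs(side_distances(W * i / N, H * j / N)[0]
           - side_distances(W * i / N, H * j / N)[1]) < EPS
]

# The centre of the rectangle lies on the skeleton, even though the
# boundary contour itself contains no trace of that inner structure:
print((2.0, 1.0) in skeleton)  # True
```

The resulting point set traces the familiar “fishbone” skeleton of the rectangle: a central horizontal segment joined to the four corners by diagonals.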

The eliminative structuralism whereby structure is a “shorthand” for different isomorphic systems which share patterns[54]; the complementarity of ‘axiom’ and ‘theorem’; eliminative structuralism of regimented propositions that have been de-ontologized.

“Another option is to deny that structures exist at all. Talk of a given structure is just convenient shorthand for talk of all systems that are isomorphic to each other… Views like this are sometimes called eliminative structuralism, since they eschew the existence of structures altogether.”[55]

Echoing the eliminativist philosophy of mathematics, Reza Negarestani invokes the Yoneda Lemma of category theory:

“The approaching of the concept or the local site from its adjacent environment and alternative perspectives is the gesture of the Yoneda Lemma in category theory. Yoneda Lemma is a phenomenologically trivial tool, but it nevertheless possesses a formidable power to reverse-engineer local concepts by way of their neighbourhood, by way of their outside. A point is nothing but the pointer that points to it. The actual mark is a pointer endowed with a limit, just like the mark that the tip of a pencil leaves on a piece of paper. Once the point is understood as a pointer, the concept of point can be made via an infinite recursive descent: A point is a point is a point is a point … ad infinitum. 8 Each pointer can be decomposed to a concatenation of different sets of pointers or addresses. The concept of the point is nothing but an alternating collection of gestural/perspectival pointers (arrows or morphisms). There is indeed a functionalist underside to this definition of the concept qua a local site: If what makes a thing a thing is not what a thing is but what a thing does, then we can decompose this activity or behavior (the behavior of the concept) into operative perspectives or possible activities that make the behavior of the concept in an inferential network. The study of the concept and its construction overlap, as they become part of a controlled exploratory approach.”[56]

Negarestani seems to indicate here that structure is merely shorthand for the isomorphic systems. Yet, as his recent work Intelligence and Spirit confirms, Negarestani is actually more of a functionalist than an eliminativist. One may think of the quintessential functionalist Hilary Putnam, who proposed that the mind is a Turing machine. The key import of the Yoneda Lemma for Negarestani is the functional decomposition of the morphisms into “the behavior of the concept” within an “inferential network”, i.e. not what the concept is, but “what a thing does”. In order to differentiate these two schools (eliminative structuralism and functionalism) from one another, we will examine a more precise definition of the former:

“The eliminativist holds that mathematical statements just are (or are best interpreted as) generalizations like these and she accuses the [Sui Generis] structuralist of making too much of their surface grammar, trying to draw deep metaphysical conclusions from that. In general, any sentence Φ in the language of the arithmetic gets regimented as something like the following:

In any natural number system S, Φ[S], (Φ’)

where Φ[S] is obtained from Φ by restricting the quantifiers to the objects in S, and interpreting the non-logical terminology in terms of the relations of S…In a similar manner, the eliminative structuralist… regiments — and deflates — what seem to [be] substantial metaphysical statements… For example, “the number 2 exists” becomes “in every natural number system S, there is an object in the 2-place of S”[57]
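This regimentation can be made vivid with a toy sketch in Python (the two “systems” below are illustrative inventions of ours): “2 + 3 = 5” is not a statement about unique objects, but a generalization holding in every system exemplifying the Dedekind–Peano structure, here specified by a zero and a successor function.

```python
# Two different "natural number systems", each given by (zero, successor).
# Neither is *the* natural numbers; both merely exemplify the structure.
systems = {
    "offset integers":    (10, lambda n: n + 1),    # 10, 11, 12, ...
    "strings of strokes": ("", lambda s: s + "|"),  # "", "|", "||", ...
}

def numeral(zero, succ, n):
    """The object occupying the n-place of the system."""
    x = zero
    for _ in range(n):
        x = succ(x)
    return x

def two_plus_three_is_five(zero, succ):
    """Regimented reading: the successor applied 3 times to the
    2-place object yields the 5-place object."""
    return numeral(numeral(zero, succ, 2), succ, 3) == numeral(zero, succ, 5)

for name, (zero, succ) in systems.items():
    print(name, two_plus_three_is_five(zero, succ))  # True in every system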

The closest continental thinker to this school is Laruelle. For Laruelle, non-philosophy is a means of suspending the transcendence (the “metaphysical conclusions” mentioned above) of any philosophical system and arriving at a transcendental which consists of the duality between ‘axiom’ and ‘theorem’. For example, were Laruelle to suspend Deleuze’s philosophy, this duality would be the pair of the ‘actual’ and the ‘virtual’ taken together. This, in effect, de-ontologizes the view from the ‘plane of immanence’ which supposedly lives in the realm of virtuality (as we discussed before, this is an idealist transcendence, since Deleuze’s third synthesis is based on the Dedekind cut). Laruelle’s radical immanence is a means for a performative philosophy in favor of material experimentation (the derived “a priori” of the unilateral duality which takes the non-philosophizable Real as its condition), as opposed to Deleuze’s plane of immanence (which has a formal a priori, namely the process of differentiation described in the fourth chapter of Difference & Repetition).

But a problem remains for eliminative structuralism. In order to keep statements such as “1+3=7” from coming out vacuously true (as they would if no natural number system existed at all), Hellman suggests a “structuralism without structures” which goes further and posits the possible existence of a model for each statement:

“A statement such as “2+3=5” is interpreted as follows:

“Necessarily, for all relational systems M, if M is a model of the Dedekind-Peano axioms, then 2_M + 3_M = 5_M.

To avoid the non-vacuity problem, he adds the following assumption:

Possibly, there exists an M such that M is a model of the Dedekind-Peano axioms.”

Superficially, Hellman’s “structuralism without structures” may sound like Laruelle’s “given-without-givenness”. But as opposed to Hellman, for whom the model is possibly necessary, Laruelle’s non-philosophy is not “philosophically necessary”[58]! John Ó Maoilearca writes:

“Non-philosophy is an unknown quantity with ‘an immanent methodology whose function for philosophy no one is in a position to evaluate as yet’. Laruelle himself expresses an allied point: there is no basis upon which non-philosophy can be commended that is itself philosophically necessary: “There is then no imperative fixing a transcendent, ontotheo-logical necessity to ‘do non-philosophy’: this is a ‘posture’ or a ‘force-(of)-thought’ which has only the criterion of immanence as its real cause — which takes itself performatively as force-(of)-thought — and the occasion of its data; which contents itself to posit axioms or hypotheses in the transcendental mode and to deduce or induce starting from them. (Laruelle 2013c, 198–9)”[59]

Laruelle, then, actually sounds closer to the axiomatic method first championed by David Hilbert. In the famous Hilbert–Frege debate, Hilbert held that axioms are schematic, while Frege held that they are assertoric. Frege famously parodied Hilbert’s method, using it to “establish” the existence of God (in a letter to Hilbert):

“We imagine objects we call Gods.

Axiom 1. All Gods are omnipotent.

Axiom 2. All Gods are omnipresent.

Axiom 3. There is at least one God.”[60]

But to be fair to Hilbert, his system of geometry was freestanding, whereas theology is not freestanding, let alone formalizable; geometry is formalizable.[61] One should consider the “axiomatic method” in the schematic sense (rather than the assertoric), where the existence of such entities need not be necessary.

The axiomatic method; the universalist unity of mathematics from the Bourbaki approach

As the legendary Nicolas Bourbaki, the pseudonym of a collective of (mostly French) mathematicians, each independently extremely influential in mathematics,[62] writes in the group’s unofficial manifesto (penned by Dieudonné):

“It must therefore be out of the question to give to the uninitiated an exact picture of that which the mathematicians themselves can not conceive in its totality. Nevertheless it is legitimate to ask whether this exuberant proliferation makes for the development of a strongly constructed organism, acquiring ever greater cohesion and unity with its new growths, or whether it is the external manifestation of a tendency towards a progressive splintering, inherent in the very nature of mathematics, whether the domain of mathematics is not becoming a tower of Babel, in which autonomous disciplines are being more and more widely separated from one another, not only in their aims, but also in their methods and even in their language. In other words, do we have today a mathematic or do we have several mathematics?”[63]

The division of mathematics into many disconnected fields was a central ideological problem for Bourbaki. With their treatise Éléments de mathématique, Bourbaki held that there is a single unified mathematics that can be axiomatically constructed from sets. Bourbaki continue by answering their own question about the unity of mathematics:

“After the more or less evident bankruptcy of the different systems, to which we have referred above, it looked, at the beginning of the present century as if the attempt had just about been abandoned to conceive of mathematics as a science characterized by a definitely specified purpose and method; instead there was a tendency to look upon mathematics as “a collection of disciplines based on particular, exactly specified concepts,” interrelated by “a thousand roads of communication,” allowing the methods of any one of these disciplines to fertilize one or more of the others [1, page 447]. Today, we believe however that the internal evolution of mathematical science has, in spite of appearance, brought about a closer unity among its different parts, so as to create something like a central nucleus that is more coherent than it has ever been. The essential aspect of this evolution has been the systematic study of the relations existing between different mathematical theories, and which has led to what is generally known as the “axiomatic method.”[64]

The axiomatic method of Bourbaki was in the tradition of David Hilbert’s axiomatic program: they wished to found the whole of mathematics axiomatically, starting from sets. Yet Bourbaki’s treatise makes no mention of Gödel’s incompleteness theorems, and the usual critique levied against the Bourbaki school is that they did not even include ‘category theory’ in their axiomatic system. Contrary to many claims that Bourbaki’s system is dated, a 2014 paper by Maribel Anacona, Luis Carlos Arboleda and F. Javier Pérez-Fernández states:

“[W]e study the axiomatic system proposed by Bourbaki for the Theory of Sets in the Éléments de Mathématique. We begin by examining the role played by the sign τ in the framework of its formal logical theory and then we show that the system of axioms for set theory is equivalent to Zermelo–Fraenkel system with the axiom of choice but without the axiom of foundation. Moreover, we study Grothendieck’s proposal of adding to Bourbaki’s system the axiom of universes for the purpose of considering the theory of categories. In this regard, we make some historical and epistemological remarks that could explain the conservative attitude of the Group.”[65]

According to Anacona, Arboleda and Pérez-Fernández, Bourbaki’s system is thus equivalent to ZFC set theory without the axiom of foundation. Moreover, Grothendieck had proposed his universes (which are used in category theory/topos theory) in a Bourbaki paper![66] As an alternative to set theory, we will discuss category theory as a system for philosophical structuralism.

Non-Euclidean geometry; the multiplicity of ‘case(s)’ through general ‘invariants’ (topological, group-theoretic or categorical) which constrain the objects into varying algebraic structures; Category theory as a philosophical structuralism

Euclid’s fifth axiom, the parallel postulate, is equivalent to the statement that through a point not on a given line there passes exactly one line that never intersects it (Playfair’s axiom).[67] The first mathematicians to provide a fully worked-out system of non-Euclidean geometry, which suspends the parallel postulate, were János Bolyai and Nikolai Ivanovich Lobachevsky. Non-Euclidean geometry is completely equiconsistent with Euclidean geometry.[68] After these advances, the philosophy of geometry was taken up by the mathematician Henri Poincaré:

“[Poincare’s] concern in his “On the foundations of geometry” (1898) was with epistemology.

Poincaré argued that the mind quickly realises that it can compensate for certain kinds of motions that it sees. If a glass comes towards you, you can walk backwards in such a way that the glass seems unaltered. You can do the same if it tilts or rotates. The mind comes to contain a store of these compensating motions, and it realises that it can follow one with another and the result will be a third compensating motion. These mental acts form a mathematical object called a group. However, the mind cannot generate compensating motions for other motions it sees, such as the motion of the wine in the glass as it swirls around. In this way the mind comes to form the concept of a rigid body motion, that being precisely the motion for which the mind can form a compensating motion.

Poincaré then considered what group the group of compensating motions could be, and found that, as Helmholtz had suggested and Lie had then proved, there was a strictly limited collection of such groups. Chief among them were the groups that come from Euclidean and non-Euclidean geometry, and as abstract groups they are different. But which one was correct?

Poincaré’s controversial view was that one could never know. Human beings, through evolution and through our experience as infants, pick the Euclidean group and so say that space is Euclidean. But another species, drawing on different experiences, could pick the non-Euclidean group and so say that space was non-Euclidean. If we met such a species, there would be no experiment that would decide the issue…This twist on the Kantian doctrine of the unknowability of the Ding an sich (the thing in itself) and our confinement to the world of appearances, was congenial to Poincaré as a working physicist, but there is an important distinction to make. The viewpoint just explained is Poincaré’s philosophy of geometrical conventionalism. He advocated conventionalism in other areas of science, arguing that what we call the laws of nature (Newton’s laws, the conservation of energy, and so forth) were neither empirical matters open to revision nor absolute truths but were well established results that had been elevated to the role of axioms in present scientific theories. They could be challenged, but only if a whole scientific theory was being challenged, not idly when some awkward observations were made. Faced with a satellite that did not seem to be obeying Newton’s laws one should, said Poincaré, consider some as-yet unnoticed force at work and not seek to re-write Newton. But a new theory can be proposed, based on different assumptions that rewrite a law of nature, because these laws are not eternal truths — we could never know such things. And if a new theory were to be proposed, one can only choose between the new and the old on grounds of convenience.”[69]

Poincaré was chiefly concerned with epistemology. In his philosophy of geometry, there is no way to know whether Euclidean or non-Euclidean geometry is the fundamental explanation of space; he believed that we are merely conditioned to see the world in the Euclidean way. This equiconsistency, not only at the mathematical level but also at the epistemological level, should lead us to consider the fundamental import of Felix Klein’s Erlangen Programme. Klein’s programme first unified all geometries through projective geometry. But secondly, and more importantly, “Klein proposed that group theory, a branch of mathematics that uses algebraic methods to abstract the idea of symmetry, was the most useful way of organizing geometrical knowledge; at the time it had already been introduced into the theory of equations in the form of Galois theory.”[70] One quantity invariant across the respective hyperbolic, Euclidean and spherical geometries is the cross-ratio: it is invariant under Möbius transformations, and Foote has exhibited a formula for it common to each geometry.[71] Klein’s programme can thus be seen as a method of instantiating different geometries through the use of different groups: conceptually, a multiplicity of ‘case(s)’ whereby general (group-theoretic) ‘invariants’ constrain the objects into varying geometries. The Gaussian curvature is another example by which an invariant determines and attenuates a surface into varying, constrained geometries (see picture).
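The invariance of the cross-ratio can be verified numerically. A brief sketch in Python (the four sample points and the particular Möbius map are arbitrary choices of ours, used only for illustration):

```python
def cross_ratio(z1, z2, z3, z4):
    """Cross-ratio of four points of the complex plane."""
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

def mobius(a, b, c, d):
    """The Mobius transformation z -> (az + b)/(cz + d), with ad - bc != 0."""
    return lambda z: (a * z + b) / (c * z + d)

pts = [0 + 0j, 1 + 0j, 2 + 1j, -1 + 3j]
f = mobius(2, 1j, 1, 3)                  # ad - bc = 6 - 1j, nonzero
before = cross_ratio(*pts)
after = cross_ratio(*(f(z) for z in pts))
print(abs(before - after) < 1e-12)       # True: the cross-ratio is invariant
```

However the four points are moved by a Möbius transformation, the cross-ratio computed from their images agrees with the original (up to floating-point rounding) — the group-theoretic invariant constraining all three geometries at once.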

Even the question of the shape of the universe can be considered according to various curvatures. The formalism of algebra (group theory) in geometry/topology has been tremendously fruitful in the fields of algebraic topology and algebraic geometry. Following Gauss’s Theorema Egregium, this method makes the intrinsic invariants of a material central (a de-ontologized metaphysics), as opposed to the extrinsic objectivation of the thing in an embedding space (a pre-ontology). But as Petitot has asked: does embedding space exist without neural networks? Moreover, following Poincaré’s epistemology, we can think of the axiomatic method as a conventionalist approach which need not state anything ontological. And there is something to be said for the “schematic” (as opposed to assertoric) approach to axioms, which Hilbert espoused, for autonomous formal systems.

Saunders Mac Lane and Samuel Eilenberg, the inventors of category theory, wrote when introducing the notion of ‘category’ in their seminal paper: “This may be regarded as a continuation of the Klein Erlanger Program, in the sense that a geometrical space with its group of transformations is generalized to a category with its algebra of mappings.”[72] Category theory was thus conceived in the same tradition as Klein’s program! Because it abstracts specific algebraic structures into categories of objects relating to one another through morphisms, category theory can be thought of as a philosophical structuralism:

“Two facets of the nature of mathematical objects within a categorical framework have to be emphasized. First, objects are always given in a category. An object exists in and depends upon an ambient category. Furthermore, an object is characterized by the morphisms going in it and/or the morphisms coming out of it. Second, objects are always characterized up to isomorphism (in the best cases, up to a unique isomorphism). There is no such thing, for instance, as the natural numbers. However, it can be argued that there is such a thing as the concept of natural numbers. Indeed, the concept of natural numbers can be given unambiguously, via the Dedekind-Peano-Lawvere axioms, but what this concept refers to in specific cases depends on the context in which it is interpreted, e.g., the category of sets or a topos of sheaves over a topological space. Thus, it seems that sense does not determine reference in a categorical context. It is hard to resist the temptation to think that category theory embodies a form of structuralism, that it describes mathematical objects as structures since the latter, presumably, are always characterized up to isomorphism. Thus, the key here has to do with the kind of criterion of identity at work within a categorical framework and how it resembles any criterion given for objects which are thought of as forms in general. One of the standard objections presented against this view is that if objects are thought of as structures and only as abstract structures, meaning here that they are separated from any specific or concrete representation, then it is impossible to locate them within the mathematical universe. (See Hellman 2003 for a standard formulation of the objection, McLarty 1993, Awodey 2004, Landry & Marquis 2005, Shapiro 2005, Landry 2011, Linnebo & Pettigrew 2011, Hellman 2011, Shapiro 2011, McLarty 2011, Logan 2015 for relevant material on the issue.)”

Category theory can be thought of as a means to instantiate varying algebraic structures through the category of sets, the category of groups, the category of rings, the category of topological spaces, and so on. The only thing that is “positive” is the morphisms relating objects within these categories, which can be thought of as the structural relations between objects. The interiors of the objects are not needed! “What is remarkable about this apparatus is that the vast bulk of structures arising in mathematics can be captured in the spare language of ‘arrows only,’ without ‘looking inside’ the objects (which, recall, are typically whole structures or spaces e.g. algebraic, metric, topological, etc.).”[73] Typical set-membership relations need not be exemplified. Everything can be viewed from “arrows only” (morphisms), as is typically shown in commutative diagrams. But the problem with such generality is that “it is impossible to locate [objects as specific or concrete structures] within the mathematical universe”. The solution to this dilemma is “type theory”:

“A slightly different way to make sense of the situation is to think of mathematical objects as types for which there are tokens given in different contexts. This is strikingly different from the situation one finds in set theory, in which mathematical objects are defined uniquely and their reference is given directly. Although one can make room for types within set theory via equivalence classes or isomorphism types in general, the basic criterion of identity within that framework is given by the axiom of extensionality and thus, ultimately, reference is made to specific sets. Furthermore, it can be argued that the relation between a type and its token is not represented adequately by the membership relation. A token does not belong to a type, it is not an element of a type, but rather it is an instance of it. In a categorical framework, one always refers to a token of a type, and what the theory characterizes directly is the type, not the tokens. In this framework, one does not have to locate a type, but tokens of it are, at least in mathematics, epistemologically required. This is simply the reflection of the interaction between the abstract and the concrete in the epistemological sense (and not the ontological sense of these latter expressions.) (See Ellerman 1988, Ellerman 2017, Marquis 2000, Marquis 2006, Marquis 2013.)”[74]
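The “arrows only” viewpoint can be illustrated in code. A minimal sketch (my illustration, not from the quoted sources) using Java’s Function type as the morphisms: we never inspect the objects, we only compose arrows and check the category laws pointwise:

```java
import java.util.function.Function;

public class ArrowsOnly {
    // Check the category laws (identity and associativity) at a sample point,
    // using only composition of arrows -- never "looking inside" the objects.
    static boolean lawsHoldAt(int a) {
        Function<Integer, Integer> f = x -> x + 1;   // f : Z -> Z
        Function<Integer, Integer> g = x -> 2 * x;   // g : Z -> Z
        Function<Integer, Integer> id = Function.identity();

        // identity law: f . id = f = id . f
        boolean identity = f.compose(id).apply(a).equals(f.apply(a))
                && id.compose(f).apply(a).equals(f.apply(a));
        // associativity: (f . g) . f = f . (g . f)
        boolean assoc = f.compose(g).compose(f).apply(a)
                .equals(f.compose(g.compose(f)).apply(a));
        return identity && assoc;
    }

    public static void main(String[] args) {
        for (int a = -10; a <= 10; a++) {
            if (!lawsHoldAt(a)) throw new AssertionError("category law failed at " + a);
        }
        System.out.println("category laws hold at the sample points");
    }
}
```

The particular functions f and g are arbitrary; the point is that every equation is stated between composites of arrows, never between elements of the objects.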

This top-down approach of category theory is explained by Awodey in his response to Hellman. We can think of it as the impartiality of abstraction: invoking invariants (as in Klein’s program) from a bird’s-eye view. Awodey defends “mathematical structuralism”. N.B. we will not yet comment upon the ontology of such a structuralism (we will address that very complicated issue in the next section), but we note that structuralism, by way of its “schematic” nature, does not require the existence of some universe of, for example, all possible rings. It requires, say, only one ring. In this sense, it has no background ontology. Let’s take Awodey’s example:

“(i) in any ring, if x² = −1 then x⁵ = x, and

(ii) the complex numbers are by definition a ring with an element i such that i² = −1, and having a couple of other distinctive properties.

The foundationalist may now object that he, too, can show by the same simple proof that:

(i’) any ring in his universe with an element such that x² = −1 also has x⁵ = x, and

(ii’) his complex numbers are a ring in which i is a root of −1”[75]
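Claim (i) can be spot-checked in any particular ring. A minimal sketch, using the finite rings Z/nZ as my own illustration (they are not Awodey’s example): whenever x² ≡ −1 (mod n), the simple proof (x² = −1 implies x⁴ = 1, hence x⁵ = x) guarantees x⁵ ≡ x:

```java
public class RingCheck {
    // In the ring Z/nZ: whenever x*x = -1 (mod n), check that x^5 = x (mod n).
    static boolean implicationHoldsIn(int n) {
        for (int x = 0; x < n; x++) {
            boolean squareIsMinusOne = (x * x) % n == (n - 1) % n;
            long x5 = 1;
            for (int i = 0; i < 5; i++) x5 = (x5 * x) % n;  // x^5 mod n
            if (squareIsMinusOne && x5 != x) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // e.g. in Z/5Z: 2*2 = 4 = -1 (mod 5), and 2^5 = 32 = 2 (mod 5).
        for (int n = 2; n <= 100; n++) {
            if (!implicationHoldsIn(n)) throw new AssertionError("fails in Z/" + n + "Z");
        }
        System.out.println("x^2 = -1 implies x^5 = x in Z/nZ for n = 2..100");
    }
}
```

The brute-force check illustrates the schematic point: the proof never asks what the ring’s elements are, only that the ring equations hold.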

Awodey (responding to Hellman) argues that category theory can provide a “philosophical structuralism.” The question everyone should be asking is whether category theory is a framework for philosophical structuralism; that it yields a “mathematical structuralism” proper is obvious. Awodey’s point is that category theory favors a top-down approach over the bottom-up foundationalist approach. A set theorist needs to define all of the necessary structures and axioms in order to do ordinary mathematics, while the category theorist need not make a “universal quantification” over the universe of all possible structures. The category theorist defines objects through morphisms, and objects are instances of a type. For example, in order to work with i² = −1, the category theorist can say “given any ring,” as opposed to defining the universe of all possible ring structures and fixing some given structure, such as the complex numbers. Awodey argues that the category theorist is making a “schematic statement about structure.” This generalizes to structures that are “formally similar” but not identical (the key to category theory’s premise of characterization “up to isomorphism”): a schematic statement about an element x satisfying, say, x² + x + 1 = x even makes sense in any semiring, since it requires no additive inverses. But “schematic” statements do not involve “universal quantification” over some universe of, say, all possible semirings, nor indeed any “universal quantification” at all (which usually requires the quantifier to range over a fixed domain). He contrasts a quantified statement over a fixed range of values, such as “for all real numbers x, x² + 1 is a real number,” with the indeterminate case, such as the polynomial ring R[x] with real coefficients, where it can be proved that there does not exist a polynomial p(x) such that p(x)² = x² + 1.
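The last claim follows from a short degree argument; a sketch (standard, not from Awodey’s text):

```latex
\text{Suppose } p(x)^2 = x^2 + 1 \text{ in } \mathbb{R}[x].
\text{ Comparing degrees forces } \deg p = 1,\ p(x) = ax + b, \text{ so}
\[
  a^2 x^2 + 2ab\,x + b^2 = x^2 + 1
  \;\Longrightarrow\; a^2 = 1,\quad 2ab = 0,\quad b^2 = 1.
\]
\text{Since } a \neq 0,\ 2ab = 0 \text{ forces } b = 0,
\text{ contradicting } b^2 = 1.
```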
“In the case of the quantified statement, we consider what is true for an arbitrary real number, while in the indeterminate case we, in effect, consider what holds for an arbitrary number x, in an arbitrary ring over R. One could say, rather speculatively, that the difference in both cases seems to be related to that between what is true for all of a fixed range of values, as opposed to what can be proved for an indeterminate value.”[76] So when a theorem invokes “a finitely generated abelian group” it does not specify the elements of the group; the statement is schematic in the sense that what the structures “consist of” is left “undetermined”. As Awodey suggests, if one relates a topological space such as S¹ to its covering space R and defines the fundamental group through the fiber over S¹, one is not thereby specifying which points make up the topological space; one is relating the group Z to the cover R via an isomorphism between the fiber and the integers (there are Z-many points stacked on top of one another).

Awodey is not arguing that category theory provides an autonomous alternative to set theory for the foundations of mathematics. He does not think there will be some “true topos” in which “all of mathematics” can be done. Rather, he thinks category theory best fits the modern approach to mathematics after Galois et al. “In category theory, such notions as relation, connection, property, and operation are all subsumed under the primitive notion of a morphism.” What is needed, he thinks, is a different interpretation of categorical mathematics, which, after homotopy type theory, is the top-down approach. On this approach one need not say where higher and higher levels come from: for example, one may have a group G, then a category C, and then Group(C), the category of all groups in C.
Everything is an instance or a type and need not be presupposed to exist; what matters is the relations between objects (as Jean-Pierre Serre said). “Thus rather than saying, for example, “now suppose this particular solar system is an atom in some huge piece of matter in an enormous solar system”, one is instead saying “now suppose this particular configuration of bodies occurs, not as a solar system, but as an atom in some piece of matter in a solar system”. The former assumption indeed requires additional (outrageous) existence assumptions, while the latter requires none. A configuration is only assumed as a structure from the start, and so it can be specialized by assuming it to occur in more special situations. The schematic character of statements about structures is clearly essential to this approach; whatever we were saying about C originally (e.g. that there is a group in it), can still be said about C after we have put it into some ambient category S, because we weren’t assuming anything particular about it in the first place.”[77]

Topos theory; Homotopy Type Theory as a foundation for mathematics; the Tarski axiom and Grothendieck’s universes

“Lawvere from early on promoted the idea that a category of categories could be used as a foundational framework. (See Lawvere 1964, 1966.) This proposal now rests in part on the development of higher-dimensional categories, also called weak n-categories. (See, for instance Makkai 1998.) The advent of topos theory in the seventies brought new possibilities. Mac Lane has suggested that certain toposes be considered as a genuine foundation for mathematics. (See Mac Lane 1986.) Lambek proposed the so-called free topos as the best possible framework, in the sense that mathematicians with different philosophical outlooks might nonetheless agree to adopt it. (See Couture & Lambek 1991, 1992, Lambek 1994.) He has also argued that there is no topos that can thoroughly satisfy a classical mathematician. (See Lambek 2004.) (For more on the various foundational views among category theorists, see Landry & Marquis 2005.)

This matter is further complicated by the fact that the foundations of category theory itself have yet to be clarified. For there may be many different ways to think of a universe of higher-dimensional categories as a foundations for mathematics. It is safe to say that we now have a good understanding of what are called (∞,1)-categories and important mathematical results have been obtained in that framework. (See, for instance, Cisinski 2019 for a presentation.) An adequate language for the universe of arbitrary higher-dimensional categories still has to be presented together with definite axioms for mathematics. (See Makkai 1998 for a short description of such a language. A different approach based on homotopy theory but with closed connections with higher-dimensional categories has been proposed by Voevodsky et al. and is being vigorously pursued. See the book Homotopy Type Theory, by Awodey et al. 2013.)”[78]

There have been many different (and interrelated) theories proposed as a foundation for mathematics outside of ZFC: the category of categories, topos theory, homotopy type theory. In this section we will discuss the ontological commitments of topos theory. We note that homotopy type theory, as Ladyman has argued, provides a foundation for mathematics.[79] As Awodey, a leading structuralist, writes: “The recent discovery of an interpretation of constructive type theory into abstract homotopy theory suggests a new approach to the foundations of mathematics with intrinsic geometric content and a computational implementation. Voevodsky has proposed such a program, including a new axiom with both geometric and logical significance: the Univalence Axiom. It captures the familiar aspect of informal mathematical practice according to which one can identify isomorphic objects.” Homotopy type theory is intensional, in the manner of functional programming languages (such as Clojure) that make use of lambdas. What are the ontological commitments of topos theory? Moreover, if ZFC set theory cannot be proved consistent within the system of ZFC itself, what are the alternatives for a formal system?

“The consistency of ZFC does follow from the existence of a weakly inaccessible cardinal, which is unprovable in ZFC if ZFC is consistent…If consistent, ZFC cannot prove the existence of the inaccessible cardinals that category theory requires. Huge sets of this nature are possible if ZF is augmented with Tarski’s axiom.”[80]

“Tarski–Grothendieck set theory (TG, named after mathematicians Alfred Tarski and Alexander Grothendieck) is an axiomatic set theory. It is a non-conservative extension of Zermelo–Fraenkel set theory (ZFC) and is distinguished from other axiomatic set theories by the inclusion of Tarski’s axiom, which states that for each set there is a Grothendieck universe it belongs to (see below). Tarski’s axiom implies the existence of inaccessible cardinals, providing a richer ontology than that of conventional set theories such as ZFC. For example, adding this axiom supports category theory.”[81]
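Why does an inaccessible cardinal settle the consistency question? A standard sketch (my gloss, not from the quoted sources): the cumulative hierarchy cut off at an inaccessible is a model of ZFC, while Gödel’s second incompleteness theorem blocks ZFC from proving its own consistency:

```latex
\kappa \text{ inaccessible} \;\Longrightarrow\; V_\kappa \models \mathrm{ZFC}
\;\Longrightarrow\; \mathrm{Con}(\mathrm{ZFC}),
\qquad \text{yet } \mathrm{ZFC} \nvdash \mathrm{Con}(\mathrm{ZFC})
\text{ (if ZFC is consistent).}
```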

By way of the Tarski axiom, one is able to prove the consistency of ZFC. The Tarski axiom states that every set belongs to some Grothendieck universe. But the axiom implies the existence of an inaccessible cardinal (a cardinal is a size of infinity; Cantor was the first to formalize cardinality and countability), whose existence is not provable within ZFC if ZFC is consistent. By asserting the existence of such enormous cardinalities, the axiom therefore makes an ontological commitment. Andrew Wiles, who proved Fermat’s Last Theorem, makes use of much of Grothendieck’s mathematics in his proof. I am not at the technical level to study his proof, but the question remains: does Wiles’ work, which makes use of étale cohomology, require these uncountable universes? To answer this question, I defer to an expert:

“The basic contention here is that Wiles’ work uses cohomology of sheaves on certain Grothendieck topologies, the general theory of which was first developed in Grothendieck’s SGAIV and which requires the existence of an uncountable Grothendieck universe. It has since been clarified that the existence of such a thing is equivalent to the existence of an inaccessible cardinal, and that this existence — or even the consistency of the existence of an inaccessible cardinal! — cannot be proved from ZFC alone, assuming that ZFC is consistent. Many working arithmetic and algebraic geometers however take it as an article of faith that in any use of Grothendieck cohomology theories to solve a “reasonable problem”, the appeal to the universe axiom can be bypassed. Doubtless this faith is predicated on, and abetted by, the fact that most arithmetic and algebraic geometers (present company included) are not really conversant or willing to wade into the intricacies of set theory….” And read this: “As a result, in order to figure out if Wiles’s proof uses universes, or whether it’s relatively easy to avoid them, you’d need to either read the proofs yourself or find someone who was both deeply familiar with the details of the proof, and someone who cares a lot about details. One person who comes quickly to mind is BCnrd [Brian Conrad]. BCnrd was one of the mathematicians who proved the Modularity Theorem, which showed that all elliptic curves over Q are modular. This is a strengthening of Taylor and Wiles’ result, which applied only to semi-stable elliptic curves, and the proof involved understanding and building on Taylor and Wiles’ work. BCnrd is also famous for his attention to detail and for consulting underlying foundational sources; he is the author of a book dedicated to simplifying and correcting the presentation of Grothendieck duality in Hartshorne’s book Residues and Duality.
As explained in the comments to Pete’s answer, BCnrd says there’s really no issue at all in Wiles’s proof. All of the specific things that Wiles uses stay far away from any of the difficult issues where you might be worried about needing to invoke universes.”[82]

Brian Conrad, a leading algebraic geometer, says that Grothendieck universes can be, and were, avoided in Wiles’ proof of Fermat’s Last Theorem. Therefore, the ontological commitments of Tarski’s axiom can be bypassed. Should we accept such an ontology for topos theory (as practiced by Grothendieck)? These questions are ultimately existence questions. But category theory does not even have the power to defend the existence of its own categories!

“The second argument against the autonomy of categorical foundations has been called the “mismatch objection”. It concerns the general status of category theory or topos theory; and it is based on the distinction between two ways of understanding mathematical axioms, namely as “structural”, “algebraic”, “schematic” or “Hilbertian”, on the one hand, and as “assertoric” or “Fregean”, on the other hand. As Hellman argues, foundational systems such as classical set theory need to be assertoric in character, in the sense that their axioms describe a comprehensive universe of objects used for the codification of other mathematical structures. Zermelo-Fraenkel set theory is an assertoric, “contentual” theory in this sense. Its axioms (e.g., the power set axiom or the axiom of choice) make general existence claims regarding the objects in the universe of sets.

In contrast, category theory represents a branch of abstract algebra, as its origin reveals. Thus it is, by its very nature, non-assertoric in character; it lacks existence axioms conceived as truths about an intended universe. For example, the Eilenberg-Mac Lane axioms of category theory are not “basic truths simpliciter”, but “schematic” or “structural” in character. They function as implicit definitions of algebraic structures, similar to the way in which the axioms of group theory or ring theory are “defining conditions on types of structures”. This point is related to another argument against the autonomy of category theory that Hellman calls the “problem of the ‘home address’: where do categories come from and where do they live?” (2003: 136). Given the “algebraic-structuralist perspective” underlying category theory and general topos theory, its axioms make no assertions that particular categories or topoi actually exist. Classical set theories, such as ZFC with its strong existence axioms, have to step in again in order to secure the existence of such objects.”[83]

We can define axiomatically what a group or ring is by means of abstract algebra, but can we prove that these structures exist? Category theory as a foundational system requires not only “algebraic axioms” (as in abstract algebra) but also axioms that “consist of positive assertions concerning the existence and inter-relations of categories and topoi.”[84] Awodey describes examples of the former axioms but says nothing about the latter.[85] Is Homotopy Type Theory the solution? We leave the true answer to that question to the philosophers of mathematics.

Arithmetic becomes geometric (Alain Connes & the Natural Numbers as a sheaf over a topos)

Alain Connes has said that even a child could perform this arithmetic on the natural numbers: one replaces the addition of two numbers with their supremum and the product of two numbers with their sum.[86] The resulting structure is known as a tropical semiring (from the field of tropical geometry). Connes views the tropical semiring as a sheaf on a topos:

“We show that the non-commutative geometric approach to the Riemann zeta function has an algebraic geometric incarnation: the “Arithmetic Site”. This site involves the tropical semiring N¯ viewed as a sheaf on the topos N̂× dual to the multiplicative semigroup of positive integers.”[87]
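Connes’ child’s arithmetic can be sketched directly. A minimal illustration (mine, not from the paper) of the tropical operations, where ⊕ is the supremum (max) and ⊗ is ordinary addition, checking the distributive law that makes (N, max, +) a semiring:

```java
public class Tropical {
    // Tropical ("max-plus") operations on the natural numbers:
    static int plus(int a, int b)  { return Math.max(a, b); } // a (+) b = max(a, b)
    static int times(int a, int b) { return a + b; }          // a (x) b = a + b

    public static void main(String[] args) {
        // Distributivity: a (x) (b (+) c) = (a (x) b) (+) (a (x) c),
        // i.e. a + max(b, c) = max(a + b, a + c).
        for (int a = 0; a < 30; a++)
            for (int b = 0; b < 30; b++)
                for (int c = 0; c < 30; c++)
                    if (times(a, plus(b, c)) != plus(times(a, b), times(a, c)))
                        throw new AssertionError("distributivity fails");
        // There are no additive inverses: max(a, b) = 0 forces a = b = 0,
        // which is why this structure is a semiring rather than a ring.
        System.out.println("tropical distributivity verified on 0..29");
    }
}
```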

Zalamea has explained the significance of Grothendieck’s toposes: “Grothendieck toposes unify deep insights on arithmetic (number) and geometry (form).”[88] A topos is a completely geometric object, whereas arithmetic is logical (for example, Peano arithmetic, which is second-order). The topos provides a radical way of seeing geometry as logic and logic as geometry:

“The history of category theory offers a rich source of information to explore and take into account for an historically sensitive epistemology of mathematics. It is hard to imagine, for instance, how algebraic geometry and algebraic topology could have become what they are now without categorical tools. (See, for instance, Carter 2008, Corfield 2003, Krömer 2007, Marquis 2009, McLarty 1994, McLarty 2006.) Category theory has led to reconceptualizations of various areas of mathematics based on purely abstract foundations. Moreover, when developed in a categorical framework, traditional boundaries between disciplines are shattered and reconfigured; to mention but one important example, topos theory provides a direct bridge between algebraic geometry and logic, to the point where certain results in algebraic geometry are directly translated into logic and vice versa. Certain concepts that were geometrical in origin are more clearly seen as logical (for example, the notion of coherent topos). Algebraic topology also lurks in the background. (See, for instance, Caramello 2018 for a systematic exploitation of the idea of toposes as bridges in mathematics.) On a different but important front, it can be argued that the distinction between mathematics and metamathematics cannot be articulated in the way it has been. All these issues have to be reconsidered and reevaluated.”[89]

For an exposition of the relationship between logic and geometry and how algebraic geometry creates this bridge, we defer to the experts who wrote the book Sheaves in Geometry and Logic: A First Introduction to Topos Theory.[90] As the mathematician and philosopher René Thom writes in his book Semiophysics:

“We will not be looking for the foundation of geometry in logic. On the contrary, we shall be seeing logic as a derived activity…a rhetoric. We shall be trying, not so much to convince, as to suggest representations and to extend the intelligibility of our world. Instead of building geometry in a logical manner, we will seek to base what is logical on geometry.”

The aim of this treatise was to follow Thom’s philosophical program of geometry as the foundation (instead of logic). Recent work in Homotopy Type Theory, which is based upon topological notions, would most likely confirm this philosophy. We conclude with a postscript critiquing the logic of object-oriented programming and economics.

Postscript: the Axiom of Choice/Zorn’s Lemma in relation to the Banach-Tarski Paradox and the ramifications for the social sciences

“The full strength of the Axiom of Choice does not seem to be needed for applied mathematics. Some weaker principle such as CC [Countable Choice] or DC [Dependent Choice] generally would suffice. To see this, consider that any application is based on measurements, but humans can only make finitely many measurements. We can extrapolate and take limits, but usually those limits are sequential, so even in theory we cannot make use of more than countably many measurements. The resulting spaces are separable. Even if we use a nonseparable space such as L^∞, this may be merely to simplify our notation; the relevant action may all be happening in some separable subspace, which we could identify with just a bit more effort. (Thus, in some sense, nonseparable spaces exist only in the imagination of mathematicians.) If we restrict our attention to separable spaces, then much of conventional analysis still works with AC replaced by CC or DC. However, the resulting exposition is then more complicated, and so this route is only followed by a few mathematicians who have strong philosophical leanings against AC.

A few pure mathematicians and many applied mathematicians (including, e.g., some mathematical physicists) are uncomfortable with the Axiom of Choice. Although AC simplifies some parts of mathematics, it also yields some results that are unrelated to, or perhaps even contrary to, everyday “ordinary” experience; it implies the existence of some rather bizarre, counterintuitive objects. Perhaps the most bizarre is the Banach-Tarski Paradox: It is possible to take the 3-dimensional closed unit ball,

B = {(x,y,z) ∈ R³ : x² + y² + z² ≤ 1}

and partition it into finitely many pieces, and move those pieces in rigid motions (i.e., rotations and translations, with pieces permitted to move through one another) and reassemble them to form two copies of B.

At first glance, the Banach-Tarski result seems to contradict some of our intuition about physics — e.g., the Law of Conservation of Mass, from classical Newtonian physics. If we assume that the ball has a uniform density, then the Banach-Tarski Paradox seems to say that we can disassemble a one-kilogram ball into pieces and rearrange them to get two one-kilogram balls. But actually, the contradiction can be explained away: Only a set with a defined volume can have a defined mass. A “volume” can be defined for many subsets of R³ (spheres, cubes, cones, icosahedrons, etc.), and in fact a “volume” can be defined for nearly any subset of R³ that we can think of. This leads beginners to expect that the notion of “volume” is applicable to every subset of R³. But it’s not. In particular, the pieces in the Banach-Tarski decomposition are sets whose volumes cannot be defined.”[91]

The Axiom of Choice is controversial to some mathematicians. It states that for any collection of nonempty sets, even an infinite collection, there exists a “choice function” that selects one element from each set. The Axiom of Choice leads to the Banach-Tarski paradox, which roughly says one ball can be decomposed and reassembled into two balls, which seems to contradict the Law of Conservation of Mass. But it is important to note that these pieces do not have volume (they are not Lebesgue measurable), so the paradox does not yield any contradiction in physics. The Axiom of Choice can be replaced by weaker, less controversial principles such as countable choice or dependent choice, and most physicists deal with cases that require only finitely many measurements. Yet Stephen Hawking once invoked the Axiom of Choice to prove a theorem.[92]

The Axiom of Choice is equivalent to Zorn’s lemma. Zorn’s lemma states: “suppose a partially ordered set P has the property that every chain in P has an upper bound in P. Then the set P contains at least one maximal element.” (A chain is a totally ordered subset.) In this section we will discuss the problems with assuming Zorn’s lemma (= AoC) in cybernetics and economics. Among the consequences of AoC: “The Axiom of Choice implies every vector space has a basis”. For the proof, see footnote.[93] For the case of Zorn’s lemma implying that every vector space has a basis, refer to the press release for the Hermetic Definition art show, which contains the proof.[94] Standard economics is premised on the notion of fungibility (what Marx called exchange-value). But we are not throwing the baby out with the bathwater purely over the existence of a basis; surely a linear algebra student would say the existence of a basis for a vector space is not much to assume. Oswald Wiener has criticized the cybernetic paradigm we live in after the research of Norbert Wiener, going so far as to say that “science and barbarism go very well together”:

“Because if one thing is clear, it’s that science and barbarism go very well together. The scientists needn’t necessarily be barbarians themselves, they need only shut their eyes or want to live nice lives or have a voice within the structures of power. The stagnation isn’t just in the social or aesthetic fields, but is also present in the natural sciences. In physics there has been no new idea that has really pushed things forwards for a hundred years. This is perhaps putting it very crudely, but thousands of the best researchers would cut off their own right arm to have an idea that amounted to a comparable scientific advance as quantum physics a hundred years ago. All kinds of things are going on, but, in the best cybernetic tradition, the simulation apparatuses are reconciling the contradictions between the theories of physics — it’s known as a “handshake” — and innovation is being thwarted. Many of the simulation techniques used by physics are successful: weather forecasting, for example, has become a lot better, but they cannot explain why. The method is to calculate on different levels that are incompatible with each other. These different levels are then glued together by a form of creative accounting. The glue they use is the same cybernetic kind that I make fun of in the improvement of central europe. It’s a pure invention, a fiction, but nevertheless it works.”

What exactly are these contradictions presupposed by cybernetics? The Axiom of Choice supposes that a choice function exists to choose an element, infinitely many times. This can be thought of as a decision. Because of the halting problem, there is a limit on when a computer can halt by itself; it may instead remain in a loop. In my stage manifesto[95] I articulated the basis on which contemporary artists live in a paradigm of “image economy”: a completely codified language of visuality, the image-object of desire. This is a means of organizing perception according to codification. Similarly, economics supposes that a topology exists on such notions as the “utility of a product” or even a person’s performance (KPI). But does this convergence exist for all objects in the space of econometrics? Do we presuppose that infinite profit is possible? The heart of the matter is Simondon’s critique of Norbert Wiener:

“Simondon criticized Norbert Wiener’s theory of cybernetics, arguing that “Right from the start, Cybernetics has accepted what all theory of technology must refuse: a classification of technological objects conducted by means of established criteria and following genera and species.” Simondon aimed to overcome the shortcomings of cybernetics by developing a “general phenomenology” of machines.”[96]

We wish to critique the object-oriented epistemology of genus and species and replace it with a Simondonian/Thomian/Piagetian “genetic epistemology,” which has no individuated atom, “the individual,” as its metaphysics. Piaget’s developmental psychology was premised on the cognitive invariants of child development, and these principles had a topological basis (for instance, conceiving that a cup is half empty). My problem with cybernetics is philosophical: its creation of virtual objects that can be compared and sorted against one another (think of the website ‘Hot or Not’). “Total orders”, if Zorn’s lemma is assumed, create a hierarchy in which there is a maximal element among the objects in the computer program. In finite cases Zorn’s lemma is unnecessary, but in infinite cases even a spanning tree does not necessarily exist without it.

In the Java language, which is object-oriented:

“there are several existing methods that already sort objects from any class like Collections.sort(List<T> list). However, Java needs to know the comparison rules between two objects. So when you define a new class and want the objects of your class to be sortable, you have to implement the Comparable and redefine the compareTo(Object obj) method.”[97]

“Sometimes we may want to change the ordering of a collection of objects from the same class. We may want to order descending or ascending order. We may want to sort by name or by address.

We need to create a class for each way of ordering. It has to implement the Comparator interface.

Since Java 5.0, the Comparator interface is generic; that means when you implement it, you can specify what type of objects your comparator can compare.”[98]

In the Java programming language, one needs to use an interface such as Comparable or Comparator in order to sort objects. From the Java website:

“public interface Comparator<T>

A comparison function, which imposes a total ordering on some collection of objects. Comparators can be passed to a sort method (such as Collections.sort or Arrays.sort) to allow precise control over the sort order. Comparators can also be used to control the order of certain data structures (such as sorted sets or sorted maps), or to provide an ordering for collections of objects that don’t have a natural ordering.

The ordering imposed by a comparator c on a set of elements S is said to be consistent with equals if and only if c.compare(e1, e2)==0 has the same boolean value as e1.equals(e2) for every e1 and e2 in S.

For the mathematically inclined, the relation that defines the imposed ordering that a given comparator c imposes on a given set of objects S is:

{(x, y) such that c.compare(x, y) <= 0}.

The quotient for this total order is:

{(x, y) such that c.compare(x, y) == 0}.

It follows immediately from the contract for compare that the quotient is an equivalence relation on S, and that the imposed ordering is a total order on S. When we say that the ordering imposed by c on S is consistent with equals, we mean that the quotient for the ordering is the equivalence relation defined by the objects’ equals(Object) method(s):

{(x, y) such that x.equals(y)}.

Unlike Comparable, a comparator may optionally permit comparison of null arguments, while maintaining the requirements for an equivalence relation.

This interface is a member of the Java Collections Framework.”[99]

So the Comparator imposes a total order on the objects. When there are finitely many elements, a maximal element can easily be identified. We are interested in the infinite case, where Zorn’s lemma (equivalent to the Axiom of Choice) must be invoked. One can refer to the mathematical definition of a “total order,” which one would learn about in a Discrete Mathematics class.[100] The problem with object-oriented programming is thus the case in which a computer needs to sort an infinite number of objects. The Axiom of Dependent Choice suffices for a countable (infinite) chain.[101] But for uncountable chains, Zorn’s lemma is needed. If one were to have virtual objects as uncountable as the real number line, one would need to use Zorn’s lemma:
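As a minimal sketch of the finite case (the `Person` record and its fields are hypothetical, not from the quoted documentation), a `Comparator` imposing a total order looks like this:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ComparatorDemo {
    // Hypothetical value class; any class with a sortable key would do.
    record Person(String name, int age) {}

    public static void main(String[] args) {
        List<Person> people = new ArrayList<>(List.of(
            new Person("Ada", 36),
            new Person("Kurt", 28),
            new Person("Emmy", 53)));

        // Comparator.comparing builds a total order from a key extractor:
        // every pair of Persons becomes comparable by age.
        Comparator<Person> byAge = Comparator.comparing(Person::age);
        people.sort(byAge);

        // In this finite total order the maximal element is simply the
        // last element of the sorted list -- no choice principle needed.
        Person oldest = people.get(people.size() - 1);
        System.out.println(oldest.name()); // prints "Emmy"
    }
}
```

The point of the finite case is visible in the last two lines: the maximal element falls out of the sort itself, which is why Zorn’s lemma only becomes an issue for infinite collections.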

“The set of Java programs (or the set of programs in any language) is countably infinite, because you can list them according to the number of characters they contain.

But since there are an uncountably infinite number of functions from N to N, then there are some functions which cannot be written in Java, or any programming language. Essentially, this means there are some functions which cannot be computed!

Not to be confusing, but there are an uncountably infinite number of functions from N to N that cannot be computed, so there are ‘more’ functions that cannot be computed than ones that can be.”[102]
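The countability claim in the quote can be made concrete by listing all strings over a finite alphabet in shortlex order (shorter strings first, ties broken alphabetically): every program text, being a finite string, then appears at some finite index, which is exactly what countability means. A minimal sketch (class and method names are our own):

```java
import java.util.ArrayList;
import java.util.List;

public class ShortlexDemo {
    // Enumerate all strings over `alphabet` of length <= maxLen in
    // shortlex order. Extending maxLen without bound lists every finite
    // string exactly once, so the set of such strings is countable.
    static List<String> shortlex(String alphabet, int maxLen) {
        List<String> out = new ArrayList<>();
        List<String> current = List.of("");
        out.add("");
        for (int len = 1; len <= maxLen; len++) {
            List<String> next = new ArrayList<>();
            for (String s : current)
                for (char c : alphabet.toCharArray())
                    next.add(s + c);
            out.addAll(next);
            current = next;
        }
        return out;
    }

    public static void main(String[] args) {
        // All strings over {a, b} of length <= 2, in enumeration order.
        System.out.println(shortlex("ab", 2)); // prints "[, a, b, aa, ab, ba, bb]"
    }
}
```

No such enumeration exists for the functions from N to N, which is the quote’s point: the programs are listable, the functions are not.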

The Riemann sphere, used as a model in non-Euclidean geometry, is obtained by one-point compactification: infinity is compactified into a single added point, and the resulting space is compact. Do economics and cybernetics suppose “infinite wealth” and use models that require “compact spaces”? René Thom addresses the question of “infinity” (in aesthetics):

“The problem of aesthetics is a difficult one. I’ve written about it somewhat, but I must confess that the elaboration of a satisfactory theory would be extremely difficult. I’ve the impression that at the root of “the aesthetic” one finds “the sacred”. What is “the sacred”? It was this question that led me to my theory of pregnances and saliences. The original idea is that all behavior, starting with that of animals, is controlled by the fact that when the animal perceives a form in its presence, reactions of attraction and repulsion are released with regard to that form, whether they be visual, auditory, olfactory, and so on. In even the most rudimentary cases, one finds these reactions of attraction and repulsion. I believe that the sense of the sacred in human beings is characterized by the fact that this axis of attraction/ repulsion can, in some sense, become self-referencing through being compactified by a point at infinity. This point at infinity is precisely what we call the sacred. Stated differently, a sense of the sacred is aroused every time we find ourselves in the presence of a form which appears to be endowed with infinite power, and which is simultaneously attractive and repulsive. As these two infinities are in opposition, one becomes immobilized relative to this form: Its fascination causes motion to cease. Because such a situation is intolerable for very long, certain accommodations emerge, which relax this paralysis through the phenomenon of sacralization.” [103]

We can ask if economics and cybernetics suppose this “point at infinity” in order to create a value system upon which objects can be sorted. René Thom generalizes this “point at infinity” to questions of metaphysics, turning one-point compactification into a question of aesthetics: “attractions and repulsions” are organized around the ideal point, which is sacred; we could call it godly. This theory of salience versus pregnance is developed in his book Semiophysics, where he makes the category of pregnances rigorous. My aesthetic attraction or repulsion is based on some ideal form of beauty. The method of one-point compactification yields the Riemann sphere. As discussed before, the cross-ratio is invariant under the action of Möbius transformations on the Riemann sphere (non-Euclidean geometry). Most non-Euclidean models suppose some absolute boundary which represents “infinity.”
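The invariance of the cross-ratio under Möbius transformations can be checked numerically. A minimal sketch (the coefficients and sample points are chosen arbitrarily with ad − bc ≠ 0; real arguments are used for simplicity, though the same formula holds for complex points on the Riemann sphere):

```java
public class CrossRatioDemo {
    // Cross-ratio (z1, z2; z3, z4) = ((z1-z3)(z2-z4)) / ((z1-z4)(z2-z3)).
    static double crossRatio(double z1, double z2, double z3, double z4) {
        return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3));
    }

    // A Möbius transformation z -> (a z + b) / (c z + d), ad - bc != 0.
    static double mobius(double a, double b, double c, double d, double z) {
        return (a * z + b) / (c * z + d);
    }

    public static void main(String[] args) {
        double[] z = {0.0, 1.0, 2.0, 5.0};
        // Arbitrary invertible coefficients: ad - bc = 3*1 - 2*1 = 1.
        double a = 3, b = 2, c = 1, d = 1;

        double before = crossRatio(z[0], z[1], z[2], z[3]);
        double after = crossRatio(
            mobius(a, b, c, d, z[0]), mobius(a, b, c, d, z[1]),
            mobius(a, b, c, d, z[2]), mobius(a, b, c, d, z[3]));

        // Invariance: the two values agree up to floating-point error.
        System.out.println(Math.abs(before - after) < 1e-12); // prints "true"
    }
}
```

Here both evaluations of the cross-ratio give 8/5: the four points move, but the invariant that constrains them does not.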

If one were to follow Diedrich Diederichsen in Kai Kein Respekt, an artist could hypothesize a homofuturism/homosociality that is an anti-systematic transcendental without the “given” or “thesis,” based on a domestic phenomenology of friendship or “twin speak”: a non-communication based on intimacy (“soulmates”?). Deleuze said of control societies, “Maybe speech and communication have been corrupted. They’re thoroughly permeated by money — and not by accident but by their very nature. We’ve got to hijack speech. Creating has always been something different from communicating. The key thing may be to create vacuoles of noncommunication, circuit breakers, so we can elude control.”[104] Deleuze echoes Klossowski’s Living Currency, which is a psychic individuation (pulsion), and Bataille’s Story of the Eye, which lays out a currency of the sexual. For Klossowski, in his Sade, My Neighbor, there is no Godhead (“headless”) in Republican society; therefore these questions about the sacred or divine beauty are meaningless. But what is the real material correlate before money and the commoditization of the thing, before currency? The best solution to this question of the ineffable Real is to be found in the aesthetic work of Gerald Donald, Dieter Roth, Theo Parrish, Robert Filliou, Hermann Nitsch, Dominik Steiger, Dorothy Iannone, André Thomkins, Isa Genzken and Jutta Koether. For example, listen to Parrish’s “Soul Control” or “Command Your Soul,” or Donald’s Dopplereffekt or ARPANET.

[1] “We show that the non-commutative geometric approach to the Riemann zeta function has an algebraic geometric incarnation: the “Arithmetic Site”. This site involves the tropical semiring N¯ viewed as a sheaf on the topos N̂× dual to the multiplicative semigroup of positive integers.”

[2] Mathematical Structuralism, pg. 2 (Geoffrey Hellman and Stewart Shapiro)


[4] “No, atoms are not entities in the way we use that term in colloquial or traditional philosophical sense. I highly suggest the text that van Fraassen wrote on whether electrons are real or not.”

[5] “However, there is also an important difference between standard if-then-ism and a category-theoretic approach in terms of the ontological commitments involved, as Awodey points out. According to standard if-then-ism, any mathematical statement can be translated into a universally quantified conditional statement, where the quantifiers are effectively meta-theoretic in nature, ranging over all set-theoretic systems of the right type. As such, the approach presupposes a rich ontology of sets in which such systems can be constructed. In contrast, along category-theoretic lines mathematical theorems do not involve such ontological commitments. There is no implicit generalization over the Bourbaki structures of a theory, e.g., over all groups, rings, or number systems. Rather, a mathematical theorem is “a schematic statement about a structure […] which can have various instances” (Awodey 2004: 57). These instances remain undetermined on purpose, unless a further specification of them is needed for the proof of the theorem in question.”

[6] “A rejoinder should be added, however. McLarty’s and Awodey’s claim that all mathematical properties expressible in categorical set theory are isomorphism invariant has been contested, e.g., in Tsementzis (2017). In fact, Tsementzis argues that neither ZFC nor ETCS provide fully structuralist foundations for mathematics, since their respective languages do not, after all, exclusively allow for the formulation of invariant properties. Then again, both Makkai’s FOLDS system (Makkai 1995, Other Internet Resources, 1998) and the Univalent Foundations program developed in Homotopy Type Theory (Univalent Foundations Program 2013) seem to meet this condition.”


[8] “Essentially, we are assuming that the mathematics founded in constructive logic that we study is no worse off as regards ontological questions than any other branch of mathematics on any other foundation. Further, we assume that whatever solutions can be found to philosophical problems regarding ontology of mathematics in other settings can equally well be applied here, in a way that is compatible with the foundational assumptions we’ve made. HoTT fits particularly well with structuralism in the philosophy of mathematics (see Steve Awodey, ‘Structuralism, Invariance and Univalence’) but we do not pursue this in the present work.”








[16] Ibid. “To give a topology (sometimes called a Grothendieck topology) on C means to specify, for each object U of C, families of maps (Ui → U)i∈I, called covering families, enjoying properties analogous to those of open covers of an open subset of a topological space, such as stability under base change and composition (see [SGA 4 II 1.3] for a precise definition). Once a topology has been chosen on C, C is called a site, and one can define a sheaf of sets on C in the same way as in the case in which C is the category of open subsets of a topological space: a sheaf of sets E on C is a contravariant functor U → E(U) on C (with values in the category of sets) having the property that for any covering family (Ui → U)i∈I, a section s of E on U, i. e., an element of E(U), can be identified via the “restriction” maps with a family of sections si of E on the Ui’s that coincide on the “intersections” Ui ×U Uj. A topos T is a category equivalent to the category of sheaves of sets on a site C (which is then called a defining site for T).”



[19] Ibid.







[26] Introduction to Topology, 2nd edition, Gamelin & Greene, pg. 188

[27] Jean Piaget, Genetic Epistemology, Chapter 1

[28] Rene Thom,







[35] “Gilbert Simondon: une pensée de l’individuation et de la technique (1994), the proceedings of the first conference devoted to Simondon’s work, further charts his influence on such thinkers as François Laruelle, Gilles Châtelet, Anne Fagot-Largeau, Yves Deforge, René Thom, and Bernard Stiegler (the latter having placed Simondon’s theory of individuation at the very heart of his ongoing and multi-volume philosophical project).”









[44] Deleuze, Difference & Repetition, pg 171–172





[49] “Where v is velocity, and x, y, and z are Cartesian coordinates in 3-dimensional space, and c is the constant representing the universal speed limit, and t is time, the four-dimensional vector v = (ct, x, y, z) = (ct, r) is classified according to the sign of c²t² − r². A vector is timelike if c²t² > r², spacelike if c²t² < r², and null or lightlike if c²t² = r²”
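The classification in this note can be sketched as a small program (a minimal illustration working directly with the components (ct, x, y, z), as in the quote; class and method names are our own):

```java
public class IntervalDemo {
    // Classify a four-vector (ct, x, y, z) by the sign of c^2 t^2 - r^2,
    // where r^2 = x^2 + y^2 + z^2, following the quoted definition.
    static String classify(double ct, double x, double y, double z) {
        double s = ct * ct - (x * x + y * y + z * z);
        if (s > 0) return "timelike";
        if (s < 0) return "spacelike";
        return "lightlike";
    }

    public static void main(String[] args) {
        System.out.println(classify(2, 1, 0, 0)); // prints "timelike"
        System.out.println(classify(1, 2, 0, 0)); // prints "spacelike"
        System.out.println(classify(1, 1, 0, 0)); // prints "lightlike"
    }
}
```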





[54] Mathematical Structuralism, pg. 2 (Geoffrey Hellman and Stewart Shapiro)

[55] Mathematical Structuralism, pg. 2 (Geoffrey Hellman and Stewart Shapiro)


[57] Mathematical Structuralism, pg. 3–4 (Geoffrey Hellman and Stewart Shapiro)



[60] Philosophy of Mathematics: Structure and Ontology, Stewart Shapiro, pg. 163

[61] Philosophy of Mathematics: Structure and Ontology, Stewart Shapiro, pg. 163

[62] “Over time the founding members gradually left the group, slowly being replaced with younger newcomers including Jean-Pierre Serre and Alexander Grothendieck. Serre, Grothendieck and Laurent Schwartz were awarded the Fields Medal during the postwar period, in 1954, 1966 and 1950 respectively. Later members Alain Connes and Jean-Christophe Yoccoz also received the Fields Medal, in 1982 and 1994 respectively.”





[67] “If a line segment intersects two straight lines forming two interior angles on the same side that sum to less than two right angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles.”






[73] Mathematical Structuralism, Hellman & Shapiro, pg. 43, Cambridge Uni Press











[84] Mathematical Structuralism, Hellman & Shapiro, pg. 44, Cambridge Uni Press

[85] Mathematical Structuralism, Hellman & Shapiro, pg. 43, Cambridge Uni Press






[91] “More precisely, Lebesgue measure is defined on some subsets of R3, but it cannot be extended to all subsets of R3 in a fashion that preserves two of its most important properties: the measure of the union of two disjoint sets is the sum of their measures, and measure is unchanged under translation and rotation. The pieces in the Banach-Tarski decomposition are not Lebesgue measurable. Thus, the Banach-Tarski Paradox gives as a corollary the fact that there exist sets that are not Lebesgue measurable. That corollary also has a much shorter proof (not involving the Banach-Tarski Paradox) which can be found in every introductory textbook on measure theory, but it too uses the Axiom of Choice…Here is a brief sketch of that shorter proof: Work in “the real numbers modulo 1” — that is, the number system that you get if you cut the interval [0,1) out of the real line and loop it around into a circle, so that 0 and 1 are the same number. (Like the way that 0 and 12 are the same on a circular clock.) In that number system, multiplication and division don’t really work very well any more, but addition and subtraction still work fine, and so does Lebesgue measure. Let’s call that number system T; its entire measure is 1. Now, the Axiom of Choice is used to “construct” a rather peculiar subset of T — let us call it C — with the property that the sets C+r = {x+r : x in C} are all disjoint from each other, for different values of the rational number r. The union of these sets is all of T. Now, if C were measurable, then so would each C+r, and they would all have the same measure, and their measures would add up to the measure of T — that is, they would add up to 1. But how many of these C+r are there? There are a countable infinity of them. If the measure of C were zero, their sum would be zero. If the measure of C were positive, their sum would be infinite. You can’t get 1, either way”

[92] “Arguments from physics may not help. Here is Bryce DeWitt reviewing Stephen Hawking and G.F.R. Ellis using the axiom of choice in 1973: The book also contains one failure to distinguish between mathematics and physics that is actually serious. This is in the proof of the main theorem of chapter 7, that given a set of Cauchy data on a smooth spacelike hypersurface there exists a unique maximal development therefrom of Einstein’s empty-space equations. The proof, essentially due to Choquet-Bruhat and Geroch, makes use of the axiom of choice, in the guise of Zorn’s lemma. Now mathematicians may use this axiom if they wish, but it has no place in physics. Physicists are already stretching things, from an operational standpoint, in using the axiom of infinity. It is not a question here of resurrecting an old and out-of-date mathematical controversy. The simple fact is that the axiom of choice never is really needed except when dealing with sets and relations in non-constructive ways. Many remarkable and beautiful theorems can be proved only with its aid. But its irrelevance to physics should be evident from the fact that its denial, as Paul Cohen has shown us, is equally consistent with the other axioms of set theory. And these other axioms suffice for the constructions of the real numbers, Hilbert spaces, C* algebras, and pseudo-Riemannian manifolds–that is, of all the paraphernalia of theoretical physics. In “proving” the global Cauchy development theorem with the aid of Zorn’s lemma what one is actually doing is assuming that a “choice function” exists for every set of developments extending a given Cauchy development. This, of course, is begging the question. The physicist’s job is not done until he can show, by an explicit algorithm or construction, how one could in principle always select a member from every such set of developments. Failing that he has proved nothing.”












