Intuitionism and Neurophenomenology

Brian McCorkle
Orthogonal Research and Education Lab
9 min read · Sep 22, 2022
A young Francisco Varela. COURTESY: Gaian Systems Planetary Cognition Lab.

Brian McCorkle is a Google Summer of Code scholar, research associate, and project evangelist. This research is part of the Cognition Futures initiative and will be featured at Neuromatch 5.


This blog post will detail the parallels between Francisco Varela’s Neurophenomenological program in Cognitive Science and Luitzen Egbertus Jan (LEJ) Brouwer’s Intuitionistic Mathematics. Specifically, we will focus on both the trajectory of Type Theoretical constructions in Mathematics and the formal aspects of Varela’s earliest and latest works. This post is inspired by readings done with the Orthogonal Research and Education Lab’s Cognition Futures Reading Group. We have covered the books “Ecology of the Brain” (Thomas Fuchs) and “On Becoming Aware” (Natalie Depraz, Varela, Pierre Vermersch). In conjunction with Yohan John’s reading group, I have also been reading AA Cavia’s “Logiciel: six seminars on computational reason”.

While we read the literature on Neurophenomenology in the Cognition Futures Reading Group (with OREL members Amanda Nelson, Bradly Alicea, and Jesse Parent), our discussions often turned to functionalist and reductionist patterns in Cognitive Science. In particular, Martiny’s article “Varela’s Radical Proposal” was the impetus for going down this path, as Martiny takes the Neurophenomenological project to task and wonders whether it has any real possibility of achieving its stated aims.

Screenshot of the Cognition Futures Reading Group in action. Fill out this form if you are interested in joining us!

As an approach to Cognitive Science, Neurophenomenology was proposed by Varela in 1996 as a methodological remedy for the hard problem [1], in response to David Chalmers. Neurophenomenology marries modern cognitive science with a disciplined approach to human experience. Varela stresses that a disciplined approach in the tradition of phenomenology is the missing piece of the hard problem, especially when viewed in conjunction with the work of Merleau-Ponty. Using this approach, Varela hopes to privilege experience and its interpretation as a validation method for “both subjective reports and empirical data” (Martiny, 2017).

Luitzen Egbertus Jan Brouwer, from https://www.jstor.org/stable/769295

Varela and Brouwer look to both historical logic and philosophy in order to ground their arguments, almost as if trying to look back to see what contemporary theorists missed or are missing. In Varela’s case (with Eleanor Rosch and Evan Thompson), “The Embodied Mind” explores the connections between Buddhist thought and the possibilities of a new kind of Cognitive Science, an “enactive” approach. Brouwer and his colleagues edited a series in the mid-20th century called “Studies in Logic and the Foundations of Mathematics”, attempting to find historical antecedents for their innovations, specifically in the Stoic and Megaric schools of Greek logic.

“The Other Euclid” from https://en.wikipedia.org/wiki/Euclid_of_Megara

Intuitionistic Type Theory was eventually formalized into Martin-Löf Type Theory, which corresponds closely to a programming language (its rules are often presented in a grammar-like notation reminiscent of Backus-Naur Form). The basic idea is that nothing is assumed, especially the law of the excluded middle (which states that, for any proposition A, either A or ~A holds). This leads to a mathematics of judgements, witnesses, and contracts: a mathematics of legality, leading eventually to the Coq proof assistant (more on this later). Martin-Löf requires explicit description and rigorous proof of the experience of mathematics, similar to the way that Varela uses Phenomenological methods to articulate our experience of cognitive processes. For an excellent summation of intuitionism: https://youtu.be/rHeUr178Y1Q
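To make this concrete, here is a minimal sketch in Lean, a proof assistant built on a Martin-Löf-style type theory (the examples are mine, chosen purely for illustration): a constructive existence claim must exhibit its witness, and the excluded middle is available only as an axiom that must be invoked by name.

```lean
-- A constructive existence proof: the witness (here, 2) is named
-- explicitly, and the checker verifies that it satisfies the claim.
example : ∃ n : Nat, n + n = 4 := ⟨2, rfl⟩

-- The law of the excluded middle is not part of the core type theory;
-- in Lean it lives in the Classical namespace and must be invoked
-- deliberately rather than assumed silently.
example (A : Prop) : A ∨ ¬A := Classical.em A
```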

Per Martin-Löf, from https://www.ias.edu/scholars/martin-l%C3%B6f

“Coq is an interactive theorem prover first released in 1989. It allows for expressing mathematical assertions, mechanically checks proofs of these assertions, helps find formal proofs, and extracts a certified program from the constructive proof of its formal specification.” — Wikipedia
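Coq has its own syntax; as a hedged illustration of the same idea in Lean (a closely related prover), a “certified program” can be expressed as a dependent pair: a result bundled with a machine-checked proof that it meets its specification. The name `certifiedDouble` is my own, not part of any library.

```lean
-- A certified program as a dependent pair: the returned number comes
-- packaged with a proof that it is even.
def certifiedDouble (n : Nat) : { m : Nat // m % 2 = 0 } :=
  ⟨n + n, by omega⟩  -- `omega` discharges the linear-arithmetic obligation

#eval (certifiedDouble 3).val  -- 6, carrying its evenness certificate
```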

Coq’s logo, from https://github.com/topics/coq?l=coq&o=desc&s=

Famously, Vladimir Voevodsky used Coq to construct his Homotopy Type Theory (HoTT), which extends Martin-Löf to several “dimensions of logic”: traditional logic is 1-dimensional, set theory is 2-dimensional, groupoid/category theory is 3-dimensional, and so on. This radical step is made possible by creating geometric structures out of logical ones: each type is treated as a space, each term as a point in that space, and each proof of equality as a path between points. Those familiar with topological arguments will recognize this type of thinking, the correspondence between abstract mathematical structures and a landscape that can be approached analytically; HoTT enables us to do just this with its logic of homotopy types. This is not the place to go into more detail on this aspect of HoTT, but see Footnote [2] for more information on the subject.
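For readers who want the formal kernel of this move, the univalence axiom at the heart of Voevodsky’s program can be stated in a single line (an informal sketch; the HoTT book in Footnote [2] gives the precise formulation): identity between types is equivalent to equivalence between types, so structures that behave the same may be treated as the same.

```latex
% Univalence (informal statement): for types A and B in a universe U,
% the canonical map from identities to equivalences is an equivalence.
(A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B)
```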

from “Foundations of Mathematics: a new proposal based on Homotopy Type Theory” by Irene Bosco https://www.researchgate.net/publication/327867674_Foundations_of_Mathematics_a_new_proposal_based_on_Homotopy_Type_Theory

HoTT enables us to construct orders of mathematical experience comparable to Depraz et al.’s orders of experience and validation, and further comparable to cybernetic orders of observation. In these constructions it could be enough to produce a diagram of the orders from each field side by side, for example:

(1) formal/Boolean logic, first-person experience, and first-order cybernetics;
(2) set theory, second-person experience, and second-order cybernetics;
(3) category/groupoid theory, third-person experience, and third-order cybernetics.

Again we reach beyond the scope of this blog post, and will not tie up this loose end, only gesture at it as a possible way forward. Imagine if we could apply the topological innovations of HoTT to Neurophenomenology, or to cybernetics! What insights could such analyses yield?

The Modes of Validation, “On Becoming Aware”, Depraz et al., 2004, p. 86

AA Cavia’s “Logiciel” [3] follows in the footsteps of Varela by advocating for computational thinking, via theoretical mathematics, that privileges experience and subjectivity over notions of the empirical or objective. Yohan John’s interest in “Logiciel” sent me down this particular research path, for which I’m very grateful. In particular, Cavia suggests that Voevodsky’s Univalence Principle in Homotopy Type Theory allows for interdisciplinary explorations of computation and reason through the shared experience of structural characteristics across fields.

“Logiciel” AA Cavia, 2022 from https://tripleampersand.org/books/logiciel/

If there were to be a synthesis between Brouwer and Varela, then this could be quite exciting indeed. The intuitionistic approach allows first-, second-, and third-person approaches, for example, to be baked into the reasoning process, since that process is defined by the Intuitionist through the selection and definition of the witnesses and judgements that fix the mathematical assumptions used to calculate (or compute, in our case). I am imagining a computer, part of whose circuits are tied into some organic process: electrodes in the mud. A connection to a human social media profile, reading its actions to stimulate a logic driven by embodiment, searching for scraps of it. Could it also truly be a logic of the sky? Cameras aimed at clouds, using different formations as probability distributions for computational reasoning processes.
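As one very small, concrete echo of witnesses and judgements being baked into the reasoning process, consider this hypothetical Lean fragment of my own devising: a bounded search that cannot even be run until the caller supplies a decision procedure, a “judge”, for the property being sought.

```lean
-- The caller must supply a decision procedure for P as an explicit
-- instance, making the judgement a visible ingredient of the search.
-- `findWitness` is my own illustrative name, not a library function.
def findWitness (P : Nat → Prop) [DecidablePred P] : Nat → Option Nat
  | 0     => none
  | n + 1 => if P n then some n else findWitness P n
```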

A camera aimed at a cloud. COURTESY: Wiktionary

But this is all wrong: embodiment is specifically not a cloud, but a sort of participation in, a witnessing of, the moment of computation. Having to define terms and proceed with awareness is precisely what Intuitionist mathematics also proposes. We can reference Varela’s own “A Calculus for Self-Reference”: “The principal idea behind this work can be stated thus: we choose to view the form of indication and the world arising from it as containing the two obvious dual domains of indicated and void states, and a third, not so obvious but distinct domain, of a self-referential autonomous state which other laws govern and which cannot be reduced by the laws of the dual domains.” This reference is not as clear on first reading as I hoped it would be, so I will explain a little: Varela’s dual domains of the indicated and the void give birth to an irreducible, invariant autopoietic state, which is equivalent to Brouwer’s alternative to the law of the excluded middle. It is not that mathematical objects exist or do not, but why they exist, and this is the crux of the parallel between Brouwer and Varela. Brouwer and the progeny of Intuitionistic Mathematics construct formalisms which are chosen for a specific situation. Varela and the Neurophenomenologists look for access to the site of choosing, the irreducible discerning consciousness that walks between the laws of classical logic.

“A Calculus for Self-Reference”, Francisco Varela, 1975, p. 7
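To see what a third, irreducible state might look like formally, here is a purely illustrative reconstruction in Lean (my own sketch, not Varela’s notation, which is a calculus in the style of Spencer-Brown): the two dual states exchange under crossing, while the autonomous state is a fixed point governed by its own law.

```lean
-- An illustrative three-valued form (my construction, not Varela's
-- own formalism): the dual states plus a third, autonomous state.
inductive Form
  | marked      -- the indicated state
  | unmarked    -- the void state
  | autonomous  -- the self-referential, autonomous state

-- Crossing exchanges the dual states but fixes the autonomous one:
-- it cannot be reduced by the laws governing the dual domains.
def Form.cross : Form → Form
  | .marked     => .unmarked
  | .unmarked   => .marked
  | .autonomous => .autonomous
```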

I have thus far briefly reviewed the different conceptual strands of Intuitionist mathematics and Neurophenomenology as represented by each field’s most charismatic figure: LEJ Brouwer and Francisco Varela, respectively. AA Cavia and Brian Cantwell Smith have also engaged with the problematic aspects of computation and attempted to reconcile these with our experience of the world. In particular, Brian Cantwell Smith poses the most interesting problems in this area and lays out their possible answers, even where no actual answers can be presented. Smith’s engagement with the field is similar to Hilbert’s, laying out a series of fascinating and controversial issues to be researched by the field. For Brian Cantwell Smith, the problem of significance takes central importance: how do we imbue computation with an awareness of what its computations mean?

The table of contents of Cantwell Smith’s unfinished “Age of Significance” (2010-present). COURTESY: Internet Archive.

We end up with four interrelated intellectual threads: Intuitionistic Math, Neurophenomenology, Types and Computation, and the Significance of Computation. In the spirit of von Neumann’s successful efforts in the 20th century to reconcile Schrödinger’s wave mechanics with Heisenberg’s matrix mechanics, I would approach these four threads with the goal of unifying them, but will stop short in this blog post, gesturing to future research programs that (to my knowledge) do not yet exist. Just as the coefficients of the wave functions relate to the entries of Heisenberg’s matrices, thereby permitting differing views to achieve the same results, it is possible that these foundational theories and approaches share a common dimension.
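For the curious, the textbook bridge between the two pictures is worth seeing explicitly (this is the standard formula the analogy leans on, not anything specific to the sources above): the matrix entries are built from the wave functions, so either formalism yields the same observable predictions.

```latex
% Matrix elements of an observable \hat{O} built from wave functions:
% the (m, n) entry of Heisenberg's matrix is an integral over
% Schrödinger's wave functions \psi_m, \psi_n.
O_{mn} = \int \psi_m^*(x)\, \hat{O}\, \psi_n(x)\, dx
```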

The Schrödinger Cat thought experiment. [Image source: By Dhatfield — Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4279886]

If we seek a commonality, we can start with the fact that all four views are products of human cognition and exist solidly in the realm of human endeavor. As examples: Brouwer’s constructivist project requires humans to carry it out, Varela’s project depends on the communicated experience of thinking entities (at the moment solidly human), Cavia reaches for an intuitive understanding of computation, and Cantwell Smith points to the necessity of meaning. But isn’t all theory/concept/philosophizing/what-have-you human? What is it about humanity that these four views have in common? Isn’t the fact of humanity a triviality?

“virtual humanity” — image from Martina Menegon’s “when you are close to me I shiver” (2020) — https://lesailleurs.art/en/oeuvre/when-you-are-close-to-me-i-shiver/

The Intuitionist program demands an almost legal process: a judgement, a witness, which must be facilitated by a human for a purpose. Neither the Intuitionist nor the Neurophenomenologist is involved in uncovering or discovering fundamental truths. Rather, the Neurophenomenologist is articulating their experience of consciousness, similarly to how the Intuitionist is articulating the assumptions and observations which make up their mathematics.

L.E.J. Brouwer lecturing at Princeton in 1953, from https://twitter.com/J_H0USTON/status/1070281439910993922/photo/1

There is a research program here, one that could open up both the field of enactive Cognitive Science and that of the Foundations of Mathematics. This research program could dramatically increase the rigor and applicability of many different research areas. In my overexcited mind, this is also key to the future of computation. We need a strong theory for how, why, and what we compute in order to deal with the difficult challenges presented by modern computing.

Footnotes:

[1] The hard problem: why and how qualia (phenomenal experience) emerge from the physical aspects of perception (systems that discriminate, differentiate, and integrate units of information).

[2] Further information on HoTT: the lecture Univalent Foundations of Mathematics by Vladimir Voevodsky (YouTube video) and the book Homotopy Type Theory (the HoTT book) from the Univalent Foundations Program at the Institute for Advanced Study.

[3] Logiciel: etymological origins place the logiciel in opposition to the material. Definitionally, logiciel is software instantiated in the form of encoded computer instructions.
