Awakening from the Meaning Crisis Part 26–32

4E Cognitive Science & Relevance Realization

Matthew Lewin
26 min read · Nov 5, 2023

--

Welcome back to Awakening from the Meaning Crisis notes. If you missed Part 20–25 click here. Notes on Dr. John Vervaeke’s Awakening from the Meaning Crisis. Link to full series here.

Part 26: Cognitive Science

Science of Cognition is Interdisciplinary:

The science of cognition is the study of the machinery of meaning realization, and the cognitive processes at work within it. Holistically, it studies ‘the mind’.

  1. Neuroscience studies it at the brain level,
  2. Computer scientists working on A.G.I. and machine learning are studying it at the information processing level,
  3. Psychologists study it at the behavior level,
  4. Linguists study it at the language level,
  5. Anthropologists study the networking of minds at the cultural level

But this fractures the term mind and makes us prone to equivocation. This is when you fall into confusion because you do not keep track of the meaning of your terms.

To study the mind we need to get the different disciplines to integrate in some way.

Philosophy can help us get integration, because it is the discipline that has us take conceptual care to try and articulate the meaning of our terms and bridge between different vocabularies, ontologies, methodologies, etc.

The discipline that tries to come up with a philosophically astute integration between these disciplines so that we can avoid equivocation and deal with fragmentation and overcome the ignorance of the causal relationship between the levels is: cognitive science. It’s an interdisciplinary science. Thus, it is the science that is going to talk about the meaning making process — because each level is about that.

What People Mean When They Say Cognitive Science:

Generic Nominalism:

  • Some people refer to the cognitive sciences, which is an example of generic nominalism (e.g. anthropology is one of the cognitive sciences, machine learning, neuroscience, etc.).
  • But this use of the term doesn’t help us in our attempt to integrate, avoid equivocation, overcome our ignorance, etc. So we should reject this as the sole meaning of what cognitive science is doing.

Interdisciplinary Eclecticism:

  • Some people understand “cognitive science” as a kind of interdisciplinary eclecticism.
  • It lets people pick and choose from different disciplines to use as needed. An analogy for this could be an interfaith dialogue.
  • The problem with this is that it's too weak: strong, transformative insights aren't being passed between the disciplines.

Synoptic Integration:

  • The third sense of cognitive science is synoptic integration.
  • It says: we need to build something between the disciplines that addresses the equivocation, deals with the fragmentation, and fills in the ignorance.
  • It acknowledges that the disciplines are not all saying the same thing, but they're also not saying completely different things. It uses a bridging vocabulary that integrates aptly across the disciplines.

“Cognitive science attempts to create constructs with multi-aptness. A balance between identity and difference that affords and provokes insightful transformation of the theorizing from one discipline to another.”

Plausibility & Trustworthiness:

What is constraining us in this? Plausibility.

The word ‘plausible’ has two meanings:

  1. As a synonym for high probability, which is not what we mean here, but rather
  2. As a synonym for reasonable: making good sense; deserving to be taken seriously.

There are 2 characteristics of plausibility:

1. We want a construct that is elegant, which refers to more than just simplicity and is about producing a variety of explanations i.e. multi-aptness.

2. But we also want convergence: a construct that has been created by many convergent, independent lines of investigation too.

Imbalance Between These:

  1. If you have an explanation that is elegant, produces lots of different explanations, and is multi-apt, but doesn't have convergence (so there's lots of bias and a lack of trustworthiness), what do you have? Conspiracy theories. They're a form of bullshitting. Far-fetched.
  2. What about where you have a lot of convergence but very little insight or integration being produced? Triviality. It's not false, it just has no transformative power; it makes no difference.

To solve this, we need a theory that has both elegance and convergence. To do this, over Parts 27–33 we are going to develop an understanding of the cognitive processes at work within the machinery of meaning cultivation (outlining a theory called 'relevance realization'), which is tied to general intelligence and problem solving.

What is Meaning Cultivation:

Cognitive science is trying to bring about profound synoptic integration that addresses equivocation, fragmentation, and ignorance.

Meaning-making (too romantic) + meaning-seeking (too empiricist) = meaning cultivation

Meaning isn't something we willfully impose on the world (a mistake from our history), and meaning isn't something we find in the world (that is to ignore the scientific revolution).

“Meaning is something between us, the way you cultivate a plant.”

Our core capacity for meaning cultivation is intelligence and general problem solving.

Intelligence:

Intelligence is the capacity that makes you a cognitive agent, whose cognition is working with meaning.

General Problem-Solver:

One way to frame intelligence is in terms of being a general problem-solver.

We want to keep intelligence separate from knowledge. If you make them synonymous, then you can’t use knowledge to explain intelligence. It becomes circular, non-explanatory.

What is it to solve a problem?

A problem is when there’s a difference between the state you’re in (initial state) and the state you want to be in (goal state).

Problem Space

“To solve a problem is to have a sequence of operations that can transform the initial state into the goal state while obeying the path constraints, preserving me as a general problem solver.” This could be considered a problem space.
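As a rough illustration, here is a minimal Python sketch of that framing. The `Problem` class, the `solves` check, and the toy counting example are illustrative inventions, not anything from the lecture:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Problem:
    initial: object          # the state you're in
    goal: object             # the state you want to be in
    operators: dict          # named transformations you can apply to a state
    constraint: Callable     # path constraint every visited state must obey

def solves(problem: Problem, plan: Iterable[str]) -> bool:
    """A plan solves the problem if applying its operators in order turns the
    initial state into the goal state without ever violating the path constraint."""
    state = problem.initial
    for op in plan:
        state = problem.operators[op](state)
        if not problem.constraint(state):
            return False
    return state == problem.goal

# Toy example: get from 0 to 3 by adding 1, while never going negative.
p = Problem(initial=0, goal=3,
            operators={"inc": lambda s: s + 1, "dec": lambda s: s - 1},
            constraint=lambda s: s >= 0)
print(solves(p, ["inc", "inc", "inc"]))  # True
```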

In Part 27, we will look at problem formulation more closely.

Part 27: Problem Formulation

Two things to note about the problem space diagram

  1. All the possible paths haven’t been drawn out (and this was on purpose)
  2. It’s misleading because it’s been created from a God’s-eye point of view. Having a problem means to be acting from the POV of the initial state, not above things. You’re ignorant of the path that will get you to the goal state.

You might then say “So what? Why have this diagram?”

Chess Games and Combinatorial Explosion:

You can use the diagram to calculate the number of pathways: F^D

(F is the number of operators at each state, raised to the D power with D being the number of stages you go through)

  • This works when analyzing a chess game, for example, where there are roughly 30 legal moves (F) at each position over roughly 60 moves (D), giving about 30⁶⁰ pathways (see the quick calculation below). This is called combinatorial explosion.
  • This is an astronomically huge number. It’s greater than the number of atomic particles that are estimated to exist in the known universe. This means you cannot search the whole space.
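To make the arithmetic concrete, here is a quick back-of-the-envelope check in Python. The 30-moves-per-position and 60-move figures are the rough estimates used above, and the atom count is a common order-of-magnitude estimate:

```python
branching_factor = 30          # F: rough number of legal moves per chess position
depth = 60                     # D: rough number of moves in a game
paths = branching_factor ** depth
atoms_in_universe = 10 ** 80   # common order-of-magnitude estimate

print(f"paths ~ 10^{len(str(paths)) - 1}")   # paths ~ 10^88
print(paths > atoms_in_universe)             # True: exhaustive search is hopeless
```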

Instead, our brains will begin searching in a tiny subsection of the whole space and will often find a solution. You’re able to immediately zero in on the relevant information.

How do we do this? Even the fastest chess-playing computer can’t check the whole space. How we avoid this combinatorial explosion is a central way of understanding intelligence. Part of what’s involved is the generation of obviousness, but how does your brain make things obvious to you? You’re constantly restructuring what you find relevant and salient.

In many ways this is the key problem that AGI research is trying to address right now. There has also been a wrestling with the distinction (partly due to initial work by Polya in a book called How To Solve It) between a heuristic and an algorithm.

Heuristic vs Algorithm:

Algorithm:

An algorithm is a problem-solving technique that is guaranteed to find a solution or prove that a solution can’t be found.

Since it relies on the idea of certainty, there's a problem: in order to be certain that you have found the answer, or that one can't be found, how much of the problem space do you have to search? To guarantee certainty you must search all of it.

Deductive logic is also algorithmic and works in terms of certainty. So does math. Which means you cannot be comprehensively logical.

"Trying to equate rationality with being logical is absurd." Rational (note the etymology: ratio, rationing…) means knowing when, where, how much, and to what degree to be logical. Which is a much more difficult thing to do.

Heuristic:

A heuristic is a problem-solving technique that is not guaranteed to find a solution, but is reliable for increasing your chances of achieving your goal.

They work by trying to prespecify where you should search for the relevant information. This is what makes heuristics a sort of bias. They bias where you’re paying attention.

This is the point of the "no free lunch" theorem: bias is unavoidable. You have to use heuristics in order to avoid combinatorial explosion, and the price you pay is falling prey to bias. "Again: the very things that make us adaptive are the things that make us prone to self-deception."
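A minimal sketch of the contrast, assuming a toy grid world (the goal position, the moves, and the straight-line-distance heuristic are all made up for illustration): the exhaustive search is guaranteed but enumerates up to 4^depth move sequences, while the greedy heuristic checks only a handful of options per step and is biased toward heading straight for the goal.

```python
from itertools import product

GOAL = (4, 4)
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def exhaustive(start, max_depth):
    """Algorithm: enumerate every move sequence up to max_depth.
    Guaranteed to find a path if one exists within that depth,
    but the number of candidate plans grows as 4**depth."""
    for depth in range(1, max_depth + 1):
        for plan in product(MOVES, repeat=depth):
            pos = start
            for step in plan:
                dx, dy = MOVES[step]
                pos = (pos[0] + dx, pos[1] + dy)
            if pos == GOAL:
                return plan
    return None

def greedy(start, max_steps=20):
    """Heuristic: always take the move that most reduces straight-line
    distance to the goal. Not guaranteed to work (it is biased toward
    'head straight there'), but it searches a tiny fraction of the space."""
    pos, plan = start, []
    for _ in range(max_steps):
        if pos == GOAL:
            return plan
        step = min(MOVES, key=lambda m: abs(GOAL[0] - pos[0] - MOVES[m][0])
                                        + abs(GOAL[1] - pos[1] - MOVES[m][1]))
        dx, dy = MOVES[step]
        pos = (pos[0] + dx, pos[1] + dy)
        plan.append(step)
    return None

print(len(greedy((0, 0))))         # 8 steps, checking only 4 options per step
print(len(exhaustive((0, 0), 8)))  # same answer, after enumerating up to 4**8 plans
```

On an open grid the heuristic happens to work; put a wall between the start and the goal and it would fail, which is exactly the bias being paid for.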

The Naturalistic Imperative:

This work so far on problem formulation is by Newell and Simon. They are taking a complex phenomenon and trying to analyze it down to its basic components. And like Descartes they’re trying to formalize and mechanize it.

So, what Newell and Simon are trying to do is take a mental term ("intelligence") and explain it using non-mental terms (analyze-formalize-mechanize). This exemplifies the scientific method.

If cognitive science can give a synoptic integration by creating plausible constructs, then it creates the possibility of making us finally be part of the scientific worldview — “not as animals or machines but giving a scientific explanation of our capacity to generate scientific explanation.”

We must do work to extend this view though.

Critiques of Newell & Simon — Essence:

Their notion of heuristics, while necessary, is insufficient.

  1. They failed to recognize other ways in which we constrain the problem space and zero in on relevant information in a dynamically self-organizing fashion.
  2. They failed to notice that they had an assumption: that all problems are essentially the same. This is kind of ironic.

We have a heuristic of essentialism: that when we group a bunch of things together with a word they must all share some core properties. An essence. Some things fall into this (“triangles” for instance all share an essence, certain features).

But not everything we group together has an essence (Wittgenstein pointed this out).

Wittgenstein used the example of games. We call many things “games.” Not all involve competition, or other people, or imagination, or pretense… you won’t find a definition that includes all and only games.

Many categories don’t have an essence. Essences allow us to generalize though, which is why we look for them. And generalizations can help us make very good predictions.

Newell & Simon thought that all problems are essentially the same, which means they only needed to find one essential problem-solving strategy, and that how you formulate a problem is therefore trivial.

All problems are not essentially the same.

Different Types of Problems:

A central one is the distinction between well-defined problems and ill-defined problems.

The example of 33+4 is a well-defined problem. Since our education is full of well-defined problems we tend to think this is what most problems are like.

Most of our problems are actually ill-defined problems, where we don’t know what the relevant information about the initial state (or goal state) is, or what the relevant operators are. Or even what the relevant path constraints are.

Examples: take good notes; follow a conversation; tell a joke; go on a successful first date. How would you code a computer to do any of these algorithmically?

“What’s actually missing in an ill-defined problem is how to formulate the problem. How to zero in on the relevant information and constrain the problem so you can solve it.” Relevance realization.

Good problem formulation is related to transcending the current framing with insight as we have discussed before.

In Part 28, we will introduce a key idea of Vervaeke's work on what this zeroing in on relevant information is — Relevance Realization — and then go on to explore it further in Parts 29–33.

Part 28: Convergence of Relevance Realization

So how do we zero in on relevant information? How does this relate to intelligence and being a general problem solver?

Categorization:

"Your ability to categorize things massively increases your ability to deal with the world": to make predictions, extract potentially important information, to communicate…

A category is not just a set of things, it’s a set of things that you sense belong together. How is it that we categorize things? We may not be able to answer that fully, but this notion of Relevance Realization is at the center of it.

Similarity and Categorization:

In logical terms, ‘similarity’ means partial identity or sharing features.

The philosopher Nelson Goodman (1906–1998) argues that if we agree with this definition, then any two objects are logically similar. For example, a bison and a lawnmower share many properties such as being found in North America, containing carbon, and having an odor.

However, what matters is which properties are considered 'relevant' or important. The shift from a logical to a psychological account of similarity occurs when we look for the relevant properties, the ones that stand out to us as salient.

What matters for psychological similarity is not for any true comparison, but finding the relevant comparisons. (This same thing happens when you decide two things are different)
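A tiny illustration of Goodman's point (the feature lists are invented for the example): if similarity is just counting shared features, the comparison never settles anything, because shared features can be multiplied indefinitely.

```python
bison     = {"found in North America", "contains carbon", "has an odor",
             "weighs more than 1 kg", "is older than a day", "eats grass"}
lawnmower = {"found in North America", "contains carbon", "has an odor",
             "weighs more than 1 kg", "is older than a day", "cuts grass"}

print(len(bison & lawnmower))  # 5 shared features, and the list could go on forever
```

What decides whether bison and lawnmowers count as similar is not the logical overlap but which of these properties are relevant to the comparison being made.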

Example of Problem Solver Robot:

Let’s say we were building a sophisticated robot or machine — an agent that can determine the consequences of its behavior and change its behavior accordingly.

And we give it a problem:

  • We give it a wagon with a handle, and on it is a battery. And much like humans or animals who acquire food, the robot is inclined to take the battery elsewhere before consuming it.
  • Alongside the battery in the wagon is a lit bomb.
  • The robot decides to pull the handle and bring the battery along (because it has determined that that is the intended effect of pulling the handle), but the bomb eventually goes off and destroys the robot.

What did we do wrong?

  • We only had the robot look for the intended effects of its behavior, we didn’t have it look for side effects.

Adding Complexity:

  • To account for side effects, we give the robot more computational power, sensors, and a black box to monitor its actions. However, when we put it back in front of the wagon, it does not do anything.
  • This is because the robot is computing all the possible side effects, which are combinatorially explosive. For instance, if it pulls the handle, it will make a squeaking noise, turn the right wheel a certain amount, turn the other wheels, cause a slight wobble due to a skew in the axle, indent the grass underneath the wheels, and alter the position of the wagon with respect to Mars.

So, assume we come up with a definition of relevance (even though, as Vervaeke will argue later in the course, this is impossible).

Adding Simple Relevance Realization:

  • Let's say we give the robot this definition of relevance we created, but it still goes up to the wagon with the battery and the bomb and just sits there calculating. When we look inside the black box we see it's been making two lists — relevant vs. irrelevant — and it's checking everything and filing it under irrelevant.

In reality, what we’re doing as humans isn’t filing things into relevant and irrelevant, we’re ignoring (somehow) what’s irrelevant and just zeroing in on what’s relevant.

(This problem of the proliferation of side effects of behavior is known as The Frame Problem, and even if you get past it you're left with the subsequent problem of having to file everything into relevant vs. irrelevant, which is known as The Relevance Problem.)
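A small sketch of why the "two lists" strategy never finishes (the stream of consequences is a stand-in invented for illustration): even a generous cap of a million checked facts leaves the robot no closer to acting.

```python
from itertools import islice, count

def consequences_of_pulling_handle():
    """Stand-in for the open-ended stream of true consequences of the action
    (squeaky wheel, indented grass, changed position relative to Mars, ...)."""
    return (f"consequence #{i}" for i in count())

# The 'improved' robot files every consequence as relevant or irrelevant.
relevant, irrelevant = [], []
for fact in islice(consequences_of_pulling_handle(), 1_000_000):
    (relevant if "bomb" in fact else irrelevant).append(fact)

print(len(irrelevant))  # 1000000 checked, essentially all irrelevant, and the bomb has long since gone off
```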

Relevance Realization:

So, what then are we doing? This is the notion of Relevance Realization that we will discuss next.

What if when we’re talking about “meaning” we’re talking about how we find things relevant to us. To each other. To part of ourselves, how we’re relevant to the world and how it’s relevant to us…

In Part 29, we will start to build an understanding of what relevance realization is. To start, we can try to find a scientific theory of relevance.

Part 29: Relevance Realization

Trying to Find a Scientific Explanation of Relevance:

Trying to find a scientific explanation of relevance is plagued with difficulties.

The main mistake: arguing in a circle. Whatever we come up with to explain relevance cannot presuppose relevance for its function.

There are three ways of explaining relevance:

  1. Representations: that there are things in the mind (ideas, pictures, etc.) that stand for or represent the world in some way.
  2. Computation: That it’s really a function of computational processes.
  3. Modularity: That there is a specific area of the brain dedicated to processing relevance.

Representation:

The issue with a representation explanation is that representations are aspectual (John Searle).

Aspectual:

  • When you form a representation of an object in your mind you do not grasp all the true properties of that object, because the number of them is combinatorially explosive.
  • Of all the properties you select just a subset. Which subset? Properties that are (wait for it:) relevant to you, structured as co-relevant to each other.

So aspectuality deeply presupposes your ability to zero in on relevance. This means representations cannot be the causal origin of relevance.

Example Studies:

  • Zenon Pylyshyn did some interesting work on something called multiple object tracking
  • His studies show people can track about 8 different objects at one time, reliably
  • What's really interesting: the more objects you track, the fewer features you can attribute to each object

"If I'm going to categorize things I need to mentally group them together." This means relevance sits below this representational level (i.e. the semantic level, how your words refer to the world).

(So far this is all consistent with reports of higher states of consciousness across cultures and through time, where people describe being in an eternal state of hereness and nowness and that its very nature is inexpressible, ineffable, and can’t be put into words.)

Computational:

Maybe the computational level can do a better job of explaining relevance realization for us. In the same way representations were about semantics, computation is at the syntactic level.

Syntax is about how a series of terms have to be coordinated together in some system. In language this refers to the grammatical rules.

Implication vs Inference:

One of the original defenders of the computational mind was Fodor, but he also had an important criticism. He pointed out that you have to make a distinction between implication and inference.

  1. Implication is a logical relationship (based on syntactic structures and rules) between propositions.
  2. Inference is when you’re actually using an implication relation to change your beliefs. And the thing about beliefs is that they have content.

Why does this matter? Because making an inference (changing beliefs) brings up the question of: what beliefs should I be changing?

The challenge lies in the explosive number of implications, requiring selective logical commitment due to cognitive limitations. Selection involves committing precious resources like attention, memory, time, and metabolic energy, making it a significant cognitive act.

Cherniak posits that intelligence stems from the ability to select relevant implications, influencing existing beliefs in a given context.

Logic Beyond Implications: Logic involves not just implications but also the rules governing them. Brown emphasizes that rules are propositions guiding resource allocation. Every rule demands interpretation, as it cannot specify all conditions within itself. Attempting to do so would make the rule unwieldy and impractical. Following a rule requires the skill of judgment, moving from propositional to procedural language. This shift is essential, given the limitations of explicit rule specification.

Situational Awareness: Exercising a skill depends on situational awareness, encompassing perspectival knowing and salience landscaping for adaptive and effective action.

The procedural knowing in skills relies on perspectival knowing, which, in turn, hinges on the fit between the agent and arena, generating affordances for action — termed participatory knowing.

Modularity:

Finally we get to the third candidate for explaining relevance, modularity.

This depends on a “central executive” function in the brain, but this won’t work because that would in itself depend on relevance realization. We’ve just pushed the problem back.

“Relevance realization has to be happening both at the feature level and the gestalt level, in a highly integrated, interactive fashion.”

Our account of relevance realization has to be completely internal, meaning: it has to work in terms of goals that are at least initially internal to the brain and emerge developmentally from it.

We hit a problem here.

In Part 30, we will go into the following argument: that we cannot have a scientific theory of relevance, and that this tells us something very deep about the nature of relevance and of meaning.

Part 30: RR meets Dynamical Systems Theory

Inductive Generalization:

There cannot be a scientific theory of relevance. Why not? It goes back to J.S. Mill (1806–1873) on how science works. That is, through inductive generalization.

The argument goes like this:

  1. Science is the process of studying things and then making predictions and claims that the same will be the case for all things of that type.
  2. It gives us a powerful way of reliably predicting the world. (This isn't meant to be an exhaustive account of science and what it gives us, just a description of how it works)
  3. J.S. Mill pointed out that what this means is we need something called systematic import. That is, science has to form categories that support powerful (i.e. reliable & broad) inductive generalizations. To be able to do that is to have systematic import.

Things Categories Need for Systematic Import:

Essence:

  • One thing we need for this is category members to be homogeneous. All the members of the category have to share properties, since this is how we’re able to make an inductive generalization that other instances will also have those important properties. The idea is that this helps us get to the essence of what something is.
  • Wittgenstein and Quine both have important things to say about this idea of ‘essence.’ We also talked about this when we talked about Aristotle too.
  • Wittgenstein pointed out (we did this with the example of a ‘game’) that many of our categories don’t have essences. This isn’t to say that no categories have essences, just that some don’t. Triangles, for example, have essences.
  • But the essence of a triangle is mathematical. Quine argued that the essences of something like a triangle are deductive essences, but what science discovers is inductive generalizations. If powerful enough, science can give us the essence of something. (e.g. the essence of gold is the set of properties that will apply to all instances of gold.)

“Essentialism isn’t bad for things that have essences.” Things like games and tables don’t have an essence, but things like gold do. And that’s okay. But this means we can’t have a scientific explanation of everything. (e.g. we can’t have a science of ‘red things.’)

“It is correct to say there are many categories that we form for which we cannot generate a scientific theory or explanation precisely because those categories are not homogeneous. They don’t have an essence.”

Stable Category Membership:

  • If there’s a constant shifting of what kind of thing is to be included in a category then we fall into equivocation.
  • (e.g. the word ‘gravity’ used to mean falling down, as into a grave, and had to do with an important seriousness. But now we use the word to describe a mode of physical interaction/attraction.)

Intrinsic or Inherent Properties:

  • Many objects have properties that aren’t intrinsic but rather are in relation to us, and these are attributive properties.
  • (e.g. something being ‘money.’ Things that are money are money because we all attribute moneyness to them, and treat them as money. And so, we can all decide that maybe gold is no longer money but we can’t decide that it has a different mass or atomic number etc.)
  • The thing to remember is many things we think are intrinsic are actually attributive. e.g. A thing being a ‘bottle’ is actually attributive, as what makes it a bottle is the way it is related to me and my usage of it. Or think about Tuesdays. There’s nothing intrinsic about “Tuesdayness” but that doesn’t mean it exists in some alternate dimension or is made of supernatural ectoplasmic goo, it just means it’s a different type of category of things on which science can’t be performed.

Relevance does not have systematic import. Relevant events are like 'Tuesday' events. Saying something is "relevant" is the same kind of claim as placing something in a category like 'white things.' Relevance also isn't stable, because things that are relevant to you in one moment or situation may not be relevant in another.

And so, relevance is not something for which we can have a scientific theory. Relevance is not intrinsic to something. There can be no ‘essence’ to relevance. Nothing is essentially relevant.

Despite all this we can form a theory of how we realize relevance — relevance realization.

Relevance and a Metaphor of Evolutionary Fittedness:

Our theory of relevance realization cannot be a theory of relevance detection. Consider, as an analogy, Darwin's (1809–1882) notion of evolutionary fitness.

  • What is it about an organism that makes it 'fit'?
  • To survive long enough to reproduce?

Vervaeke argues there is no essential design to fittedness. (Some are big, some are small, some are fast, some are slow, some have feathers and fly, etc.)

“The environment is so complex and differentiated and dynamically changing that niches — ways in which you can fit into the environment in order to promote your survival — are varied and changing.” Darwin’s insight was that there is no essence to design or ‘fittedness’. “Fittedness has to be in a constant process of re-designing itself in a self-organizing fashion.”

If we make relevance analogous to biological fittedness, we could think of relevance as cognitive interactional fittedness. We don’t need a theory of this, what we need is a theory of how this evolves.

Example:

Our attention is getting constrained, our sensing is feeding back into our acting and is integral to our moving, and so sensing and moving are in a constantly changing, adjusting feedback relationship: a sensory-motor loop. "What if there is a 'virtual engine' [in your brain] that is regulating that sensory-motor loop so that it is constantly evolving its cognitive interactional fittedness to its environment?"

What we need for a theory of relevance realization is a set of properties that are sub-semantic, sub-syntactic, can establish the agent-arena participation, with processes that are self-organizing, multi-scale, and originally grounded in an autopoietic system… these properties are bioeconomical properties.

Body as a Bio-Economical System:

Think of your biology as economic. Not in the financial sense.

“An economy is a self-organizing system that is constantly dealing with the distribution of goods and services. The allocation and use of resources, often in order to further develop and maintain that economy.”

Your body is a bio-economy. Ultimately Darwin’s theory was a bio-economic theory. Economies are, very importantly, multi-scale. They work locally, globally, simultaneously bottom-up and top-down.

Embodiment of Cognition:

“There is a deep dependency between your cognitive agency as an intelligent problem-solver, and the fact that your brain exists within a bioeconomy.” “The body is an autopoietic bioeconomy that makes your metacognition possible.”

No body = no mechanism for the process of relevance realization

The biological fittedness of a creature is not a property of the creature per se. It's a real relation between the creature and its environment. Fittedness is neither an objective nor a subjective property; it's a property that is co-created in a dynamic, evolving fashion.

Vervaeke argues that we should not see relevance as something we subjectively project, as the Romantic claims. Instead, relevance is transjective. (Neither projected nor detected, but realized.)

Realization has two aspects to it: an objective sense (makes it real), and a subjective sense (coming into awareness). These two things represent the transjectivity of relevance realization. So, it is necessarily both embodied and embedded.

This is anti-Cartesian — the mind needs the body and vice versa.

"When I say internal, I don't mean subjective. I don't mean inside the introspective space of the mind. I mean internal to an embodied, embedded, brain-body system. An autopoietic system of adaptivity."

What kind of norms are at work in a bio-economy, regulating things? Logistical norms. Logistics is the study of the proper disposition and use of your resources.

Efficiency and Resiliency:

Logistical norms are things like efficiency and resiliency.

The autonomic nervous system is divided into two components: your sympathetic and your parasympathetic.

  • Sympathetic = biased toward looking for and interpreting evidence that you should raise your level of arousal.
  • Parasympathetic = the reverse — when you should lower your level of arousal. Notice that they are opposed in their goals, but also interdependent in their function.

They pull against each other dynamically to find the optimal amount. This is opponent processing.

This opponent processing means your level of arousal is constantly evolving to fit the environment. It’s not perfect, but it’s a powerful way to get optimization.
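A toy numerical sketch of opponent processing (the update rule, the gain, and the 'demand' numbers are invented for illustration, not a model from the lecture): one term only ever pushes arousal up toward the current demand, the other only ever pulls it down, and their ongoing tug-of-war keeps arousal tracking a changing situation.

```python
def regulate_arousal(demands, arousal=0.5, gain=0.3):
    """Toy opponent processing: the 'sympathetic' term only raises arousal toward
    the current demand, the 'parasympathetic' term only lowers it; together they
    keep arousal tracking the situation. Numbers are invented for illustration."""
    trace = []
    for demand in demands:
        sympathetic = gain * max(0.0, demand - arousal)       # case for raising arousal
        parasympathetic = gain * max(0.0, arousal - demand)   # case for lowering arousal
        arousal += sympathetic - parasympathetic
        trace.append(round(arousal, 2))
    return trace

# A threat appears (demand jumps to 0.9) and then passes (drops back to 0.2):
print(regulate_arousal([0.9, 0.9, 0.9, 0.2, 0.2, 0.2]))
# arousal climbs toward 0.9 while the threat lasts, then relaxes back down
```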

There is this same opponent processing for efficiency and resiliency.

Example: if you trim away all the fat and squeeze out every efficiency, then when one person is sick no one can pick up the slack, because everyone is already working to the max. What if there's an unexpected threat or opportunity in the environment? "I have no resources by which I can repair, restructure, redesign myself."

If you make a system too efficient you lose resiliency. They are in a tradeoff relationship. Resiliency is trying to enable you to encounter new things, and to deal with unexpected situations.

"What if I set up a virtual engine in the brain that makes use of this tradeoff relationship, that sets up a virtual engine between the selective constraints of efficiency and the enabling constraints of resiliency, and that virtual engine bio-economically — logistically — shapes my sensory motor loop with the environment so that it's constantly evolving its fittedness?" This may be a scientific theory of how relevance evolves — of relevance realization.

In Part 31, we will outline more of these bio-economic dynamic processes in relation to relevance realization — and show how relevance realization is the basis for general intelligence.

Part 31: Embodied-Embedded RR as Dynamical-Developmental GI

What are some other transjective, bio-economic opponent processes related to relevance realization?

Scope vs Applicability:

How would you want to make information processing more efficient?

You would want the functions — the processes — you’re using to be as generalizable as possible.

  • Compression: One thing we’ve learned about in statistics is the “line of best fit.” Drawing a line through a scatterplot allows us to interpolate & extrapolate — to make predictions. To go beyond the data. You can start to generalize. This is data compression, where you try to pick up on what is invariant and extend that.
  • Particularization: What about the opposite? This is where you create a function that over-fits, in some sense, trying to keep with the data and stay more specifically in contact with the situation.

Compression tends to pick up on what's invariant, but particularization tends to pick up on more of the variations. One is more context-general, the other context-sensitive; they dynamically trade between one another, between efficiency and resiliency, a.k.a. scope vs. applicability.

These two things (scope & applicability) are cost functions. This is about the scope of information.
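A small numpy sketch of the trade-off (the data are synthetic and the polynomial degrees arbitrary): a low-degree fit compresses the data into a general trend that extrapolates well, while a high-degree fit particularizes, hugging the sample so tightly that it projects badly beyond it.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 15)
y = 2 * x + 1 + rng.normal(scale=2.0, size=x.size)   # noisy but basically linear data

compressed     = np.polyfit(x, y, deg=1)    # compression: keep the invariant trend
particularized = np.polyfit(x, y, deg=12)   # particularization: hug every data point
                                            # (numpy may warn the fit is poorly
                                            #  conditioned, which is rather the point)

new_x = 15.0                                # extrapolate beyond the observed data
print(np.polyval(compressed, new_x))        # near the true value 2*15 + 1 = 31
print(np.polyval(particularized, new_x))    # typically wildly off: generality was traded away
```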

Exploration vs Exploitation:

What about the timing of information?

The more you stay put, the more opportunity cost you accrue. But the more you move around, the less you can draw from any one part of the environment. So, you're always trading between exploring and exploiting, and you can reward either error reduction or error increase to keep this process going. This is known as cognitive tempering. It has to do with the projectability of your cognitive processing.
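A standard way to make this trade-off concrete is an epsilon-greedy bandit, sketched below (the 'patches' and their payoffs are invented; this illustrates explore/exploit in general, not anything specific from the lecture): with a small probability the agent samples a random patch of the environment, otherwise it returns to the patch that has paid off best so far.

```python
import random

def epsilon_greedy(patch_means, steps=1000, epsilon=0.1, seed=1):
    """Explore with probability epsilon, otherwise exploit the best patch found so far."""
    rng = random.Random(seed)
    totals = [0.0] * len(patch_means)
    counts = [0] * len(patch_means)
    reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon or 0 in counts:
            patch = rng.randrange(len(patch_means))           # explore: sample somewhere new
        else:
            patch = max(range(len(patch_means)),
                        key=lambda p: totals[p] / counts[p])  # exploit: go where it's been good
        payoff = rng.gauss(patch_means[patch], 1.0)
        totals[patch] += payoff
        counts[patch] += 1
        reward += payoff
    return reward / steps

# Three patches of the environment with different average payoffs:
print(epsilon_greedy([0.2, 0.5, 1.0]))   # approaches 1.0 as the agent mostly exploits the best patch
```

Pure exploitation (epsilon = 0) risks locking onto a mediocre patch; pure exploration (epsilon = 1) never cashes in on what has been learned.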

These examples aren’t exhaustive but they are exemplary of virtual systems that can adapt within constraints between the sensory-motor loop and the environment. Sometimes you’re focusing, and sometimes you’re diversifying.

Complexification and Self Transcendence:

"Sometimes what makes something relevant is how it's the same, how it's invariant. Sometimes what makes something relevant is how it's different, how it changes. And you have to constantly shift the balance between those because that's what reality is doing." Relevance realization is constantly navigating these different trade-offs through opponent processing in the dynamic bio-economy of the mind-body.

And when a system is self-organizing like this, there is no deep distinction between its function and its development. When a system is simultaneously integrating and differentiating information to help its dynamic movement and tradeoffs, it is complexifying. As systems complexify, they self-transcend. They go through qualitative development.

As self-transcendent systems complexify it leads to emergent abilities. (“When I was a zygote I could not vote. I could not give this lecture.”)

If you're a relevance-realizing thing, you're an inherently dynamical, self-organizing, autopoietic thing, which means you are an inherently developmental thing, which means you are an inherently self-transcending thing. You have the ability for self-transcendence by optimizing relevance realization — something we will cover in later parts.

Relevance Realization as the Basis for General Intelligence:

An argument can be made that relevance realization is the ability underlying one's general intelligence, given their explanatory relationship.

“Your general intelligence can be understood as a dynamic developmental evolution of your sensory motor fittedness that is regulated by virtual engines that are ultimately regulated by the logistical normativity of the opponent processing between efficiency and resiliency”

Why Relevance Realization Matters:

The reason we are spending so much time on this is it is the linchpin argument of the cognitive science side of the series. Relevance realization is relating a lot of what we have talked about so far. It is likely embedded in “your procedural, perspectival, participatory, knowing, it’s embedded into your [transactional] dynamical coupling to the environment and the affordance of the agent arena relationship; the connectivity between mind and body, the connectivity between mind and world.”

We’ll see that we can use this machinery to come up with an account of the relationship between intelligence, rationality, and wisdom. We will be able to explain so much of what’s at the centre of human spirituality. We will have a strong plausibility argument for how we can integrate cognitive science and human spirituality in a way that may help us to powerfully address the meaning crisis.

In Part 32, we will explore relevance realization in relation to the brain and insight.

Part 32: RR in the Brain, Insight, and Consciousness

Continuing from the compression and particularization distinction within RR — here we will look at suggestive evidence for it.

Synchronization and Asynchronization of Neurons:

When neurons are firing together they're doing something like compression (efficiency, assimilation) — synchronous firing is associated with a connection being made, "ah-ha" moments, etc.

(There’s also increasing evidence that when human beings are cooperating in joint attention and joint activity their brains are getting into patterns of synchrony)

Neurons will synchronize and then go out of sync and back again in a rapidly oscillating manner. This is self-organizing criticality (SOC).

Self-Organizing Criticality:

This traces back to the work of Per Bak and the “sand pile.”

Sand naturally self-organizes into a mound (high level of order) until it reaches a critical phase where one grain triggers an avalanche and the entire system collapses. Some argue civilizations collapse similarly due to general systems failure.

After the avalanche, the sand pile spreads out, introducing variation and creating a bigger base.
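A minimal sketch of Per Bak's sandpile (the grid size, drop count, and topple-at-four rule follow the standard textbook model; this is an illustration, not the original simulations): grains pile up, and every so often a single grain triggers an avalanche far larger than the drop that caused it.

```python
import random

def sandpile(size=11, grains=2000, seed=0):
    """Drop grains one at a time; any cell holding 4 or more grains topples,
    sending one grain to each neighbour (grains fall off the edge).
    Returns the avalanche size (number of topples) triggered by each drop."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(grains):
        grid[rng.randrange(size)][rng.randrange(size)] += 1
        topples = 0
        unstable = True
        while unstable:
            unstable = False
            for i in range(size):
                for j in range(size):
                    if grid[i][j] >= 4:
                        grid[i][j] -= 4
                        topples += 1
                        unstable = True
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            if 0 <= i + di < size and 0 <= j + dj < size:
                                grid[i + di][j + dj] += 1
        avalanches.append(topples)
    return avalanches

sizes = sandpile()
print(max(sizes), sum(s == 0 for s in sizes))  # a few huge avalanches among many drops that trigger none
```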

This is what's happening in the brain. Compression ⇄ Particularization. In milliseconds, it's evolving moment-by-moment, complexifying its structural-functional organization — its sensory-motor fittedness to the environment. It's doing RR.

Vervaeke suggests that SOC can implement RR, which he equates to General Intelligence (g). Thus, we should observe a correlation between SOC and g.

Thatcher et al. found that there is a strong relationship between measures of self-organization and how intelligent you are. Specifically, the more flexible you are between synchrony-asynchrony the more intelligent you are. It demonstrates a kind of dynamic evolvability. (Not conclusive though)

Network of Neurons:

“We also need to think about not just how neurons are firing but how they’re wiring — what kind of networks they’re forming”

Graph theory, or network theory, has emerged as a way we can study networks, and it has gotten very sophisticated. But basically there are 3 kinds of networks (and this holds in a scale-invariant way): regular networks, which are highly clustered but have long path lengths across the network; random networks, which have short path lengths but little clustering; and small-world networks.

A Small World network is between the two. It's optimal: it optimizes for both efficiency and resiliency. It turns out there is increasing evidence that Small World networks are associated with the highest functionality in your brain.
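A quick check of that picture with the standard Watts-Strogatz construction (this assumes the `networkx` library; the node count, neighbour count, and rewiring probabilities are arbitrary illustrative choices): the regular lattice is highly clustered but "far" across, the random graph is "close" but unclustered, and the small-world graph keeps most of the clustering while getting path lengths close to the random graph's.

```python
import networkx as nx

for label, p in [("regular lattice", 0.0), ("small world", 0.1), ("random", 1.0)]:
    g = nx.connected_watts_strogatz_graph(n=200, k=6, p=p, seed=42)
    print(label,
          round(nx.average_clustering(g), 2),             # local clustering (redundancy / resiliency)
          round(nx.average_shortest_path_length(g), 2))   # average hops between nodes (efficient reach)
```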

Firing is Self-Organizing Criticality and wiring is Small World Networks. The more it fires via SOC the more it wires via SWN. And vice versa. They mutually reinforce each other’s development.

Insight and RR:

The work of Stephen and Dixon connects this to insight by measuring the level of entropy in people’s processing.

Entropy increases right before insight, and then it drops.

This is plausibly evidence of self-organizing criticality. You’re breaking frame with the neural avalanche and then you’re making frame, like with the new mound that forms, as you restructure your problem-framing.

Featurization feeds up into foregrounding which feeds up into figuration.

RR is your participatory knowing. This feeds up into your salience landscaping which is your perspectival knowing, which gives you dynamic situational awareness. This opens up an affordance landscape for you, which gives you affordance obviation, and this is the basis of your procedural knowing — knowing how to interact. (We’ll come back later into how propositional knowing relates to all of this)

If all this is the case, you can think of your salience landscape as having 3 dimensions to it:

Centrality is the “here-ness,” time is the “now-ness,” and aspectuality is “together-ness”.

“A lot of the phenomenology of your consciousness is explained along with the functionality of your consciousness.”

In Part 33, we will look at how this links to our spirituality and completes the picture of relevance realization.

[For Part 33–39 click here]

--

Matthew Lewin

Studying a Masters in Brain and Mind Science at USYD. Interested in cognitive science, philosophy, and human action.