Thought Lengths: The Ray Model
A model of cognition that attempts to avoid the usual System 1/System 2 dichotomy (“the elephant and the rider”) while also providing a framework for critiquing and improving one’s thought processes.
If I say “Hi, how are you?” and you live in white middle class America, you’ll almost certainly say something resembling “Pretty good, you?” If I ask something like “What’s happened this week that you’ll remember five years from now?” I’ll get a response that’s a lot less predictable, but it’ll most likely be made out of words that I at least sort of understand.
There’s a lot going on in the space between question and answer, and thanks to the work of generations of psychologists and neuroscientists (and a few unlucky souls with iron rods through their brains and so forth), we’re getting closer and closer to having some clear/workable/reliable causal models.
We don’t have them yet, though, and while we’re waiting, it’s interesting to see what we can accomplish if we don’t even try. Call it a black box, and treat humans as complicated input/output devices with a whole bunch of levers and knobs—a stimulus goes in, some stuff happens under the hood, and a response comes out.
Beneath the Surface
My fellow input/output device Alex Ray took this idea and ran with it, positing what I’m calling the Ray model because I really like puns.
Let’s imagine the stimulus/response pattern as a ray or vector, and our minds as a surface. The external, sensory universe is everything above the surface, and the internal, cognitive universe is everything below (this is another one of those wrong-but-useful models). Something—say, a question—sparks a line of thought, and that line of thought leads to something else—like an answer.
If the stimulus/response doesn’t take very long (it’s an easy question, or a familiar motion like catching a tossed ball, or a visceral response like one’s reaction to a strong smell), then in our model the line will be short, as will the distance between the input and the output.
If, on the other hand, there’s significant processing involved, then we can imagine a much longer line, and a greater distance between input and output:
In the example above, the thought process is fairly straightforward (at least for people who are comfortable with mental math). Once you’ve picked a strategy, it’s mostly just churning away until the calculation is complete.
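If it helps to make the metaphor concrete, here’s a toy sketch in Python. Everything in it (the `Thought` class, the `length` method, the particular paths) is my own invention for illustration, not part of the model itself:

```python
from dataclasses import dataclass, field

# Toy sketch of the Ray model: a thought is a path from stimulus to
# response, and its "length" is how much processing happens in between.
@dataclass
class Thought:
    stimulus: str
    response: str
    path: list = field(default_factory=list)  # concepts activated en route

    def length(self) -> int:
        """A longer path means more processing between input and output."""
        return len(self.path)

# A short, reflexive ray: the answer comes almost straight off the surface.
greeting = Thought("Hi, how are you?", "Pretty good, you?", ["social script"])

# A longer ray: mental arithmetic churns through several steps first.
mental_math = Thought(
    "What's 17 x 24?",
    "408",
    ["pick a strategy", "17 x 20 = 340", "17 x 4 = 68", "340 + 68 = 408"],
)

print(greeting.length())     # 1
print(mental_math.length())  # 4
```

The only thing the sketch captures is the essay’s core claim: two thoughts can share a surface (same kind of stimulus, same kind of response) while differing enormously in how far below it they travel.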
There are plenty of stimuli, though, that don’t cause a straight march from stimulus to response, but instead send us all over our own minds, activating a large number of concepts and processes before finally cashing out to some new conclusion or action:
Furthermore, there isn’t always a single line. Sometimes the same stimulus can spark multiple threads of thought, each of which will have its own length and path.
It’s also kind of fun to imagine what happens when things get subconscious, such as when we find ourselves making connections or entering emotional states that we can’t fully explain or justify. It’s pretty easy to imagine a second, deeper, opaque-ish surface that represents the limit of what we can “see” with our metacognition, but we’ll hold off on that for now, lest we summon the ogres.
From Science to Engineering
Okay, so—what’s this model good for?
First off, I’ve always been bothered by the popular wrong-but-useful division of human cognition into “system 1” processes (older, simpler, reflexive, effortless, fast, nonverbal) and “system 2” processes (more recent, more complex, reflective, effortful, slow, explicit). I’m fond of the Ray model precisely because it ignores differences that don’t matter to me on a behavioral level (such as which areas of the brain are currently lighting up) and instead treats all kinds of thinking as being made up of the same fundamental particle.
*cough* I mean, ray.
Second, besides being a neat toy, it’s provided me with a very clear conceptual framework for how to change my broken thinking. At the Center for Applied Rationality, where I work, we teach a variety of different standalone rationality techniques—discrete mental movements that, if properly applied, can help mitigate one bias or another. Evaluating my own thoughts in terms of how long they are is starting to feel like another tool in that toolkit.
I can identify a variety of stimulus/response patterns that are too short—where I can reliably predict that I would reach more optimal outputs, given more time. Examples include:
- Sudden changes in plans, which cause me to grumble and grouse even if the new plan is better
- Unanticipated requests for my time or energy, which often lead me to commit to giving help I start to regret twelve seconds later
- Improper summarizing, where I’ll halo- or horns-effect people, plans, and activities and thereby lose opportunities to mix and match
CFAR canon has a handful of techniques (such as goal factoring or Gendlin’s Focusing) that are good at increasing the distance between input and output, and once I got “some thoughts are shorter than they ought to be” into my head as an organizing principle, I found myself reaching for those techniques more often and more appropriately.
Conversely, I have several thought patterns that are too long, such as:
- The amount of psyching up that I have to do before performing a parkour or freerunning movement that I’ve done thousands of times before without injury (but not in the past month! Oh, no!)
- Rumination loops on decisions and consequences that are firmly in the past and have no further lessons for me to learn
- Decision paralysis where the expected value of further investigation or weighing-of-the-options is far smaller than the cost in time and attention
…and again, there are techniques (such as propagating urges, narrativemancy, and certain trigger-action plans) that can help. Being able to think “oh, this is a line of reasoning that I should be able to skip to the end of, or at least cache somehow once I finish, so that I don’t have to rederive it every time,” has been a big net positive.
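For readers who like their heuristics executable, the too-short/too-long triage above can be sketched in a few lines of Python. The thresholds and the pairing of directions with techniques are my own loose glosses on the essay, not CFAR doctrine:

```python
# Toy triage in the spirit of the Ray model: compare a thought's length
# to a rough target and pick a direction of adjustment. The numbers and
# the technique suggestions are illustrative, not canonical.
def triage(actual_length: int, target_length: int) -> str:
    if actual_length < target_length:
        return "too short: slow down (e.g. goal factoring, Focusing)"
    if actual_length > target_length:
        return "too long: cache or skip ahead (e.g. trigger-action plans)"
    return "about right: leave it alone"

# A snap commitment I'll regret twelve seconds later:
print(triage(actual_length=1, target_length=5))
# Psyching up yet again for a parkour move done a thousand times before:
print(triage(actual_length=20, target_length=3))
```

The point of the exercise isn’t the code; it’s that “compare actual length to ideal length, then adjust” is simple enough to run in your head, in real time, on your own thoughts.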
Finally (though this is a small benefit), the simple visual metaphor of moving the exit point for a given thought has helped with things like non-useful emotional triggering during intense conversation. The model has helped me recognize certain…golf holes?…geysers?…lava tubes?…where my thoughts tend to drift, and given me a clear way to evaluate potential replacements (“Is this new kind of ‘answer’ far enough from the old answer that I won’t just sliiiiiiide right back into the old groove?”).
It’s neat that this model post-dicts a lot of things that make sense for entirely different reasons (such as slowly counting to ten before speaking, or rehearsing a given mental process until it becomes easy). As far as “tools you could teach a ten-year-old” go, I posit that this is one of the more powerful, given how sensible and versatile it is.
Then again, I’ve been thinking that to myself so much that it just, y’know, pops to mind without much critical reflection. Let me know if you have any lengthy thoughts on the matter.