Pain is in the mind — Interoceptive Predictions and Painful Experience
The following is not evidence that philosophical zombies are real — just evidence that I can turn myself into one in order to turn in something that looks like what modern philosophy is supposed to look like. Like, it reads okay and should make sense if you’re into the lore. But I’m arguing with ghosts here, throwing an argument out into the aether and hoping someone out there says “oh hey, I didn’t know I was wrong to think about pain this way — now I’m going to do marginally better work as a result of this insight”. Anyway, here’s something I did for a grade, for an audience of a few people who really seem to like this sort of thing.
Neuroscience and cognitive psychology are slowly but surely shifting away from behavioural stimulus-response models toward predictive models. The now-fundamental stimulus-response model was summarised by John Watson, father of behavioural psychology: “To predict, given the stimulus, what reaction will take place; or, given the reaction, state what the situation or stimulus is that has caused the reaction” (Watson, 1925). Neuroscience is refining and even overturning this traditional view — most notably for interoceptive modalities.
This new something has many names and rival theories, but the underlying assumptions predict a fundamental shift — that experience comes from within, not without¹. Pain is among these interoceptive modalities and provides a prime candidate for these predictive models. I’ll settle on a particular predictive model for this discussion of pain perception: Embodied Predictive Interoception Coding (EPIC) (Barrett & Simmons, 2015). EPIC describes an integrated brain that fields incoming sensory data — data which directly cause motor control circuits to prime themselves. These data are also used for error-checking against what the brain predicts the sense data should be, given available information, in order to maintain allostasis².
Sensory inputs constrain estimates of prior probability (from past experience) to create the posterior probabilities that serve as beliefs about the causes of such inputs in the present. (Barrett & Simmons, 2015)
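Stripped of neuroscientific detail, this is just Bayes’ rule. Here is a toy sketch — every cause and number in it is hypothetical, chosen only for illustration — of how a prior (past experience) is constrained by sensory input to yield a posterior belief about the cause of that input:

```python
# Toy Bayesian update: prior beliefs about the causes of a sensation,
# constrained by the likelihood of the incoming sensory data.
# Causes and probabilities are illustrative, not empirical.

priors = {"tissue_damage": 0.05, "harmless_touch": 0.95}      # from past experience
likelihood = {"tissue_damage": 0.90, "harmless_touch": 0.10}  # P(sharp signal | cause)

unnormalised = {cause: priors[cause] * likelihood[cause] for cause in priors}
total = sum(unnormalised.values())

# posterior: the present belief about what caused the input
posterior = {cause: p / total for cause, p in unnormalised.items()}
```

Note that even a strong “sharp signal” likelihood leaves the harmless interpretation more probable here, because the prior dominates — which is the predictive point: what the system believes going in shapes what it concludes coming out.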
The philosophy-friendly equivalent here is more difficult to find than it was with stimulus-response behaviourism, around whose model representationalism was built. Strong representationalism is hard-pressed to accommodate this predictive model. Softer forms can of course contort themselves more easily to accommodate at least some of the wrinkles in EPIC processing, even if directionality is muddled and explanatory and predictive utility are diminished.
I’ll now zero in on why this newer model for sensory input (EPIC) presents a thorny problem for strong representationalism. First, I restate the stock-standard strong representationalist position (which I’ll call the representationalist position for brevity, unless otherwise noted):
The content of perception is that which is conveyed to the subject through her perceptual experience. Representation is evaluated on the basis of how well the content — including its properties — is instantiated in the subject and is accessed through transparent experience. This experience is intentional by definition and the entirety of phenomenal experience can be captured in this manner. And the experience is the experience of something — the thing which generated the content. (Dretske, 1996).
In the case of vision, we can readily provide a functional account for at least some of its outputs. Good vision requires a set of functional abilities. These include the detection of edges and lines, the differentiation of colour, and luminance detection — each of which we have reproduced (in alternative form) with perceptual proxies for our own eyes (e.g., cameras). Complete visual functionality also includes the execution of more cognitively demanding techniques — object recognition, identification, feature detection, etc. In the end, these tricks and techniques are generally adequate to allow humans to move about in space, fulfilling their biological destiny. And while there are ever-present concerns about capturing the phenomenal aspects of colour — such as the apparent valence of certain shades — we can more or less capture the biggest functionally obvious parts of colour perception without a perfect understanding of this phenomenal content.
Unlike colour perception, pain perception lacks obviously functional, intentional, representational content. The experience of pain is not a result of a representation of anything, except itself. To be sure, some kinds of pain experiences include a sense of location and severity, but pain is not the thing that is being represented — cell damage is, or seems to be. And it’s not pain that is being received — it’s nociceptive signals, sometimes³. But in the end, the function of pain is apparent. Pain carries negative affect and is motor-motivational. A painful experience is necessary for humans insofar as it informs us that we are in real and present danger of harm, including self-harm. Those without painful experience in response to nociceptive events have greatly shortened lifespans, in spite of their ability to cognitively conceptualise cell damage (Katz, 2001).
Instead of perceiving cell damage, we perceive pain. Surely a top-tier, focused sense for cell damage would avail us more than the vague, crowded, and low-resolution pain we experience, right? EPIC shows some promise in answering this question.
Embodied Predictive Interoception Coding
In Embodied Predictive Interoception Coding, there’s still a flow of data towards the brain, but the brain does not directly use this data to create perceptual experience. Instead, brain hardware runs its existing software based on past, present, and future (predictive) states and fields all sensory input through three primary channels, weaving together a single seamless experiential narrative. Barrett and Simmons put it this way:
The goal is to minimize the difference between the brain’s prediction and incoming sensation (that is, the ‘prediction error’). This can be achieved in any of three ways: first, by propagating the error back along cortical connections to modify the prediction; second, by moving the body to generate the predicted sensations; and third, by changing how the brain attends to or samples incoming sensory input (Barrett & Simmons, 2015).
(1) is a form of integrated informational housekeeping. This functionality is non-representational, spilling outside the core definition of content. Moreover, it’s unclear how this housekeeping functionality could be evaluated for accuracy. The content carries a potential for producing new mental states — (2) and (3) may fire differently as a result of this information — but the representation is not manifested directly. The best defence for (1) as a form of representation might be this: accurate representation is that which most effectively leads the overall system to perform the kinds of functional tasks needed to achieve allostasis. A lack of housekeeping might lead to incoherence or worse, but it’s difficult to say that the housekeeping itself has anything to do with representing the stimulus. And it’s doubly difficult to see how this housekeeping fits into the standard definition of strong representationalism.
(2) describes action content, similar to (1) in category but entirely distinct in practice. Sensory data lead the brain to activate action-linked neuronal networks. This activity does not directly produce conscious perception. However, the activated motor circuits can themselves lead to further actions, leading to a perceptual experience. Moreover, there’s an implicit magnitude of effect based on prior arousal — context will affect representation and therefore painful experience. If there were earlier sense data that edged the system’s state toward activation potential, this action content might push pieces of the system to actuate.
Again, it’s difficult to see how this can be fit into strongly representational content. And this is not just because the idea that representations can be actions is already a strain on what it even means to represent something. These direct-action perceptions must be explained away by representationalists.
(3) outlines the counter-intuitive stimulus-response reversal at the heart of conscious perception in predictive coding models. Instead of sensory data merely flowing in and transforming into conscious, evaluable content instantiations, (3) asserts that a brain in motion uses this incoming information to update its predictions and models. Given a significant discrepancy between stimulus and state, this leads to a notable change in experience — but only if this information is considered sufficient and useful for maintaining allostasis (Barrett & Simmons, 2015).
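Taken together, (1), (2), and (3) can be caricatured as a single control loop. This is a deliberately crude sketch, not the EPIC model itself; the scalar signal, learning rate, threshold, and attention rule are all my own hypothetical stand-ins:

```python
def minimise_prediction_error(prediction, sensation, attention_gain=1.0,
                              learning_rate=0.5, act_threshold=2.0):
    """One step of a toy predictive-coding loop over a scalar signal.

    Returns the updated prediction, a motor command (or None), and a new
    attention gain. All numbers are illustrative, not physiological.
    """
    error = attention_gain * (sensation - prediction)

    # (1) update the model: propagate error back to revise the prediction
    prediction += learning_rate * error

    # (2) act: if the error is large, move to generate the predicted sensation
    motor_command = "withdraw" if abs(error) > act_threshold else None

    # (3) re-sample: damp attention to inputs that keep confirming predictions
    attention_gain = 1.0 if abs(error) > act_threshold else attention_gain * 0.9

    return prediction, motor_command, attention_gain
```

Feed it a prediction of 0 and a sudden sensation of 5 and the loop both revises its model and issues an action; feed it matching values and it quietly turns its attention down. Nothing in the loop is a direct transformation of stimulus into experience.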
Cashed out in traditional philosophical terms and put more ambitiously, in the EPIC model, representational interoceptive content could not be considered any kind of direct representation — instead, good content is merely useful input for the larger system’s predictions and plans. Phenomenal subjective experience is entirely separate from sensory input, strictly speaking. Again, this is not to say that changing sensory input does not lead to different experiences reliably being created, but it’s surprisingly difficult to find perceptions which can be considered directly caused by inward data flow, once you actually set up experiments to test this. For the sake of this discussion, however, I will constrain my claims and ambitions to those involving pain perception. Pain science has long held a strong distinction between nociception and painful experience. Everything coming out of the work on predictive coding just serves to pull them apart.
A Painful Event, in Scientific Terms
To sharpen this claim, I’ll walk through a brutally simplified example of what happens when I poke a subject — I’ll call her Dorothy — with a needle to her left little finger. I’ll do this using purely anatomical terms, adhering to the IASP definition of pain (IASP Task Force on Taxonomy, 1994).
First, as the needle pierces her skin, it deforms and damages tissue; the mechanical disturbance and the inflammatory mediators released by damaged cells activate the nociceptors (colloquially, “pain neurons”) in Dorothy’s finger, beginning the causal chain. Two kinds of fibres are activated here: A-delta fibres (for sharp pains) and c-fibres (for slow, throbbing pains). Both send electrical signals to the spinal column, which then hands these up to the brain through the spinothalamic tract. Then it’s into the thalamus, which relays the signals to the somatosensory cortex’s region for Dorothy’s left little finger. What happens next is a cause for contention, but in the EPIC model and other competing predictive models, Dorothy’s mind then notes this activity and uses it to generate the experience of pain. This includes the location, type, and suspected cause of the event that threatens her continued allostasis. In the end, this may or may not lead to conscious Dorothy responding with complex behaviour to normalise her state — it depends on a great deal of contextual factors⁴.
Some things to note⁵:
- The nociceptive signal is not a signal of the pain — features such as intensity, type, and duration are captured deeper in cognitive processing, while location is encoded in the somatosensory space the nociceptive signal occupies.
- There are three types of nociceptive receptors and channels — thermal, chemical, and mechanical. The needle-poking event generally triggers just the mechanical receptors, but the type of pain experienced can often only be inferred from contextual cues, assembled cognitively, after the fact.
- Nociceptors will fire in response to a great deal of stimuli — in fact, they are continually misfiring. This means the majority of nociceptive activity contributes nothing to painful experience. What separates non-experience from experience when the signal is the same? In EPIC, prediction and allostasis.
- A-delta fibres send stronger and quicker motor circuit signals. Touch receptors also fire but are processed separately to begin with. The mind (much) later compares these to nociceptive inputs, integrating them as needed.
- Internal, visceral nociception and external-facing nociception utilise the same channels, and subjects introspecting the apparent location of a signal often confuse the two. But even with the confusion, the experience is just as real.
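The gap these notes describe — identical nociceptive signals that sometimes do and sometimes don’t become painful experience — can be sketched as a prediction-gated filter. The threshold, the weighting, and the parameter names below are hypothetical illustrations, not claims about actual neural quantities:

```python
def becomes_painful(signal, predicted, allostatic_relevance, threshold=1.0):
    """Toy gate: a nociceptive signal reaches experience only when its
    prediction error, weighted by allostatic relevance, clears a threshold.
    All constants are illustrative."""
    prediction_error = abs(signal - predicted)
    return prediction_error * allostatic_relevance > threshold

# The same signal, in two different predictive contexts:
surprise = becomes_painful(0.8, predicted=0.0, allostatic_relevance=2.0)  # unexpected: True
routine = becomes_painful(0.8, predicted=0.7, allostatic_relevance=2.0)   # predicted: False
```

The point of the sketch is only that the gate lives in the prediction, not in the signal: the first and second calls receive the identical signal value, and differ solely in what the system expected.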
This is what happens physiologically, but as I am interested in the perception of pain and how it relates to representationalist accountings of pain, I’ll collate and reframe the above account in terms of content and representation.
The Representationalist Account
To a representationalist, nociception is pain. Moreover, the experience of pain in a particular location is evaluated on the basis of the content of a pain event. So, when I poke Dorothy’s finger and she experiences pain, representationalists will point to this as a successful, unbroken causal chain — painful content realised. And if Dorothy experienced a sharp pain in her finger in absence of an external cause, the pain is a hallucination. Or if the nociceptors fired due to something else, somewhere away from the origin of nociception — it doesn’t matter what or where — and Dorothy had a corresponding pain experience, the pain is illusory.
This explanation is lacking. As explored in the physical explanation of pain, the road between nociception and painful experience is full of potholes and outright chasms. Not only does most of the transference between firing nociceptors and yelling "ouch!" involve leaky transmissions and conversions — these themselves pose issues for representationalism — but nociception is just neurons firing, nothing more. The signal is not the experience. Painful experiences cannot be illusory or hallucinatory, because they stand for themselves (Aydede, 2017).
A weaker representationalist account will be more careful to not conflate nociception and experience. And it will readily accept the graininess and noisiness of the signal, the lost data packets and the bimodal input, the alpha-delta and c-fibres, each apparently representing different aspects of pain which sometimes coalesce into a conscious, unified pain experience. But even the most modern representationalist account cannot help but squirm when pressed to answer for the unbridgeable gap between nociceptive signal and painful experience.
One solution is to banish perceptual pain experience to the shadowy realm of non-representational interoception. On the surface, this is perfectly acceptable. Many reputable philosophers such as McGinn did this in the earlier days of representationalism:
Bodily sensations do not have an intentional object in the way perceptual experiences do. We distinguish between a visual experience and what it is an experience of; but we do not make this distinction in respect of pains. Or again, visual experiences represent the world as being a certain way, but pains have no such representational content (McGinn, 1991).
The aspiring representationalist is therefore free to continue explaining exteroceptive perceptual experiences as before, without worrying about the trickiness in pain and its interoceptive siblings. However, critical clarity and consistency are lost in the process. Why is nociception fundamentally different from vision and audition? All of these involve physical disturbances triggering neuronal events. Sure, nociception suffers from the signal-experience discrepancy, but should this hairiness be sufficient cause to disqualify all interoceptive experiences from representationalism? This is heavy-handed and problematic.
Kind agrees and argues that this restricted version of representationalism would first have to show why we cannot explain other categories of phenomenal experiences with the same representational language (Kind, 2007). The burden is on restrictive representationalists to prove that this interoceptive system somehow breaks with the exteroceptive one in a meaningful way, as opposed to being merely completely inconvenient for their models.
Apparent properties of pain: location, severity, cause
Aside from this attempted restrictivist dodge, representationalists have other avenues for escape. To wit, they can claim that pain really is nociceptive representation by appealing to the apparent reality-laden properties of pain — location, severity, type, etc. How is a non-representationalist to avoid the charge of murder when their fingerprints are all over the murder weapon? In this case, by refusing to allow their position to be straw-manned.
My view is not that pain has absolutely nothing to do with nociception — it’s that the experience of pain does not stand in a direct representational relationship with nociception. Instead, a predictive mind in motion takes nociception into consideration. If judged contextually significant, the neuronal firings have some of their intrinsic features extracted so we can build a usefully rich, painful experience from them — location, severity, and type included. Whether these features track actual damage matters to the subject, as a continually misfiring sense of pain is obviously not in a subject’s interest. Generally, painful experience serves its function well enough to steer us away from self-destruction. Misfirings carry a cost, but that cost has not been steep enough to be weeded out by evolutionary selection.
What Predictive Coding Gets Us
As is the case in any new framework, it’s unclear how predictive coding fundamentally alters the way we understand core issues like pain perception. Indeed, there are extremely nuanced contemporary versions of representationalism that would not miss a step in accommodating predictive coding or even embracing it. Yet there are a surprising number of hard-nosed strong representationalists at large, on both the conceptual and empirical ends — it therefore makes sense to assault this position. The strategy I employed here was to raise the possibility that some of the issues that plague their accounts of pain perception can be attributed to a fundamentally flawed neuroscientific approach to pain — the framing of pain as a purely stimulus-response process. And when neuroscience shifts, it’s only natural to expect the philosophical literature to shift accordingly.
My view is that we should start from what we know for sure: that we feel pain, that we sometimes feel the pain to be somewhere, and that this painful experience keeps us from wanton self-harm. That much says something about pain having some representation — top-down representation, at least. Bottom-up pain representation is less clear. Nociception does not represent pain or cell damage, just itself. And pain perception represents potential cell damage to our action-capable self.
Perhaps it’s not so important to worry about directionality or the directness of representation and reference. Pain has obvious functional, evolutionarily driven utility. It holds features that point to meaningful actions which can lead to future non-painful experiences. However, this should not compel us to box it into simple stimulus-response representationalism. Should neuroscience continue to devour stimulus-response to feed predictive models, philosophers must be ready to capture the new world order with its probabilistic, stateful, contextual, subjective, backwards, fussy implications.
Aydede, M. (2017). In J. Corns (Ed.), The Routledge Handbook of Philosophy of Pain. London; New York: Routledge, Taylor & Francis Group.
Barrett, L. F., & Simmons, W. K. (2015). Interoceptive predictions in the brain. Nature Reviews Neuroscience, 16(7), 415–429.
Dretske, F. (1996). Phenomenal Externalism or If Meanings Ain’t in the Head, Where Are Qualia? Philosophical Issues, 7, 143–158.
IASP Task Force on Taxonomy. (1994). Classification of Chronic Pain (2nd ed.). (H. Merskey & N. Bogduk, Eds.) Seattle: IASP Press.
Katz, S. (Ed.). (2001). Neuroscience: Exploring the Brain (Second ed.). Philadelphia, PA, USA: Lippincott Williams & Wilkins.
Kind, A. (2007, Jun). Restrictions on Representationalism. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 134(3), 405–427.
McGinn, C. (1991). Consciousness and Content. In N. Block et al. (Eds.), The Nature of Consciousness, 295–307.
Watson, J. B. (1925). Behaviorism. New York, NY: W.W. Norton & Company, Inc.
¹ This isn’t to say that stimulus-response models weren’t open to experience coming from within a subject, just that they weren’t predictive of this. Contrast this with predictive coding, which expects experience to be generative, based on software state and inference, not direct stimulus.
² Allostasis: homeostasis through physiological and/or behavioural change.
³ Emotional pains clearly violate this, but they will be left on the sidelines here. This is not because representationalism doesn’t need to worry about them, but because they’re so worrisome they warrant a dedicated treatment.
⁴ She could be sleeping deeply, be experiencing a migraine, or just deeply focused on another task. Or maybe she’s meditating and is completely reframing her painful experience as somehow external. It could even lead to her feeling pleasure in this case.
⁵ Evidence and examples drawn from (Katz, 2001).