Reading Summary: The Normative Insignificance of Neuroscience
Just read Selim Berker’s article “The Normative Insignificance of Neuroscience.” Such an ass-kicking fun read!
Berker first outlines Greene et al.’s neuroimaging findings and their purported bearing on the consequentialism-deontology debate, then points out a range of experimental and philosophical issues with their approach. The conclusion that Singer and Greene draw from the empirical findings is this: deontological intuitions involve higher degrees of emotional processing, and should therefore be discounted in favor of consequentialist ones.
The entire experiment is based on the dual-process hypothesis (kinda reminds me of Thinking, Fast and Slow). In particular, Greene et al.’s basic theoretical setup is as follows:
- Footbridge-like cases (FL) → deontological judgments → based on personal factors → emotional
- Trolley-like cases (TL) → consequentialist judgments → based on impersonal factors → cognitive
Two testable hypotheses follow from this setup:
- FL cases involve higher activity in brain areas associated with emotion, whereas TL cases involve higher activity in brain areas associated with rational info processing.
- The minority of people who think that we should throw the chubby guy to save five in FL cases should take longer to respond than their peers, since they have to use the rational parts of the brain to suppress the emotional parts.
Testing confirms both claims, but three empirical worries ensue. First, the posterior cingulate — an “emotional” area — lights up in both kinds of cases. This is not so bad; the claim can be recast comparatively, so that what matters is whether a judgment is more or less emotion-dependent, not whether emotion areas are active at all.
The second one is rather more disconcerting. In compiling response times (RT), they took the average of all uncharacteristic responses to FL cases and compared it to the average of all characteristic ones. (Presumably the sample size didn’t allow evaluating each FL case individually.) But the baseline RT varies from question to question (some vignettes are wordier than others), so if the two response types are unevenly distributed across questions, this lumping method does not yield a meaningful comparison.
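Here’s a minimal simulation of the worry (my own illustration, not from Berker or Greene et al.; the vignette names and numbers are made up). Within each case, characteristic and uncharacteristic responders take exactly the same time, yet the pooled averages still come apart:

```python
import random

random.seed(0)

# Two hypothetical FL-style dilemmas with different baseline reading times
# (one vignette is wordier than the other).
baseline_rt = {"short_vignette": 4.0, "long_vignette": 8.0}  # seconds

# Suppose uncharacteristic ("push the man") responses happen to be rare on
# the short vignette and common on the long one.
n_responses = {
    ("short_vignette", "characteristic"): 40,
    ("short_vignette", "uncharacteristic"): 5,
    ("long_vignette", "characteristic"): 20,
    ("long_vignette", "uncharacteristic"): 25,
}

def simulate(case, n):
    # RT depends only on the case, never on the response type.
    return [baseline_rt[case] + random.gauss(0, 0.5) for _ in range(n)]

pooled = {"characteristic": [], "uncharacteristic": []}
for (case, kind), n in n_responses.items():
    pooled[kind].extend(simulate(case, n))

for kind, rts in pooled.items():
    print(f"{kind}: mean RT = {sum(rts) / len(rts):.2f} s")
# Pooled means come out around 5.3 s vs 7.3 s purely because the response
# types are unevenly spread across vignettes -- a Simpson's-paradox-style
# confound, with no within-case RT difference at all.
```

So a pooled RT gap is consistent with there being no extra “suppression” cost whatsoever; the comparison would need to be made within each question.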
The last one concerns the very distinction between FL and TL cases. On Greene et al.’s definition of “personal,” not all personal cases are FL cases that yield deontological intuitions (Kamm’s Lazy Susan case). So it is unclear that the basic experimental setup is valid.
On the philosophical side, it is clear to me that Greene et al. are using fundamentally consequentialist assumptions to argue for consequentialism (at least based on Berker’s reading; I should read the original Science paper sometime soon). The “emotion bad, reasoning good” argument assumes that, well, emotion is bad! In particular, it shouldn’t be trusted within the moral realm. It’s unclear how neuroscience could prove this without our having a prior conception of what the morally right answer is. And if we had one, there would be no dilemmas and we wouldn’t have to do any of this. This requirement also underlies the argument from heuristics.
Most versions of consequentialism require impartial evaluation of the state of affairs tout court. So of course their proponents would try to steer clear of emotional influence, which can easily turn the evaluation’s aim into “best state of affairs for someone.” But if you don’t buy this assumption to begin with, then having some emotive basis for moral judgment is not a problem for you. And, as it turns out, even if you do subscribe to this assumption, it is not neuroscience that provides the validation. At best, neuroimaging might establish that one set of judgments is more emotion-driven. From there, anything evaluative we say about emotion-driven judgments (that they are bad, that they should be excluded from ethical thinking, etc.) has nothing to do with fMRI.
The other bad argument is the one from evolution. It goes like this: because emotion-driven judgments are products of evolution, they should not count as much as reason-driven ones. But as Berker points out, everything, including reason, is a product of evolution. Just as our natural feeling of kinship does not make racism or speciesism right, the evolutionary origin of our emotive intuitions does not make the moral judgments they produce wrong. If we can’t go from is to ought, we probably can’t go from is to oughtn’t, either.
The last, and main, argument that Berker identifies, the one from morally irrelevant factors, can be seen as an elaboration of the “emotion bad” argument — it offers an explanation of why emotion is bad. Basically, it’s because emotion responds to personal factors, which are morally irrelevant. If that’s true, then it certainly seems that we should circumvent emotion in moral thinking. Can we thereby reject deontology? Well, recall the third methodological worry: since we haven’t really figured out what counts as a personal case, we can’t really tell what kind of judgment counts as characteristically deontological. So it is not clear what we would be rejecting here.
Berker’s most fatal strike has to do with (unsurprisingly) the significance of neuroscience in all of this. He asks us to grant everything to the experimental results — assume the statistics are valid, the distinctions are clear-cut, and the imaging data track “deontological” and “consequentialist” judgments perfectly. Does neuroscience then successfully refute deontology?
Consider what neuroimaging does. As an experimental tool, it serves to test hypotheses. The hypothesis actually tested here is this:
Assuming (1) that deontological judgments arise more in cases involving personal factors such as direct bodily harm, and (2) that personal matters are more emotion-inducing, do characteristically deontological judgments involve greater activity in brain regions associated with emotional processing than characteristically consequentialist judgments do?
On the most charitable reading, we can answer this affirmatively using Greene et al.’s data. Alright, so deontological judgments involve emotion regions, and from that we conclude that deontological judgment is more emotive. To refute deontology, we just have to argue that emotive judgment is bad. The reason offered here is that emotion responds to morally irrelevant factors: personal rather than impersonal ones.
So… deontology is wrong because only impersonal factors should matter in moral thinking. Hmm, we didn’t need to know anything about brain regions to pull that off: we only needed the consequentialist assumption of impartiality. So I suppose Berker’s conclusion can be stronger than the one he presents in this paper: not only does the neuroscientific evidence play no role, the structure of the argument itself, once we remove the fMRI stuff, is circular.