Normative Externalism — Introduction
This book defends normative externalism. This is the view that the most important norms concerning the guidance and evaluation of action and belief are external to the agent being guided or evaluated. The agent simply may not know what the salient norms are, and indeed may have seriously false beliefs about them. But this does not matter. What one should do, or should believe, in a particular situation is independent of what one thinks one should do or believe, and (in some key respects) of what one’s evidence suggests one should do or believe.
Normative externalism holds that normative beliefs, and normative evidence, have very little role in inquiry. In general, one’s evidence is relevant to what one should do. The normative externalist denies a natural generalisation of this little platitude. Although evidence about matters of fact is relevant to what one should do, evidence about the normative in general is not.
It’s worth starting by thinking through an example of where evidence is relevant to mundane action. A person, we’ll call him Baba, is looking for his car keys. He can remember leaving them in the drawer this morning, and has no reason to think they will have moved. So the natural thing to do is to look in the drawer. If he does this, however, he will be sadly disappointed, for his two year old daughter has moved the car keys into the cookie jar.
Things would go best for Baba if he looked in the cookie jar; that way he would find his car keys. But that would be a very odd thing for him to do. It would be unreasonable, and irrational, to look there. It wouldn’t make any sense. If he walked down the steps, walked straight to the cookie jar, and looked in it for his car keys, it would shock any onlookers because it would make no sense. It used to be thought that it would not shock his two year old daughter, since children that young had no sense that different people have different views on the world. But this isn’t true; well before age two children know that evidence predicts action, and are surprised by actions that don’t make sense given a person’s evidence (He, Bolz, and Baillargeon 2011). This is because from a very young age, humans expect other humans to act rationally (Scott and Baillargeon 2013).
In this example, Baba has a well-founded but false belief about a matter of fact: where the car keys are. Let’s compare this to a case where the false beliefs concern normative matters.
Gwenneg is at a conference, and is introduced to a new person. “Hi,” he says, “I’m Gwenneg,” and extends his hand to shake the stranger’s hand. The stranger replies, “Nice to meet you, but you probably shouldn’t shake my hand since I have disease D, and you can’t be too careful about infecting others.” At this point Gwenneg pulls out his gun and shoots the stranger dead.
Now let’s stipulate that Gwenneg has the following beliefs, the first of which is about a matter of fact, and the next three are about normative matters.
First, Gwenneg knows that disease D is so contagious, and so bad for humans both in terms of what it does to its victims’ quality and quantity of life, that the sudden death of a person with the disease will, on average, increase the number of quality-adjusted-life-years (QALYs) of the community. (QALYs are described in McKie et al. (1998), who go on to defend some philosophical theses concerning them that I’m about to assign to Gwenneg.) That is, although the sudden death of the person with the disease obviously decreases their QALYs remaining, to zero in fact, the death reduces everyone else’s risk of catching the disease so much that it increases the remaining QALYs in the community by a more than offsetting amount.
Second, Gwenneg believes in a strong version of the ‘straight rule’. The straight rule says that given the knowledge that x% of the Fs are Gs, other things equal it is reasonable to have credence x% that this particular F is a G. Basically everyone believes in some version of the straight rule, and basically everyone thinks that it needs to be qualified in certain circumstances. When I say that Gwenneg believes in a strong version of it, I mean that he thinks that it takes quite a bit of additional information to block the transition from believing x% of the Fs are Gs to having credence x% that this particular F is a G. Nick Bostrom (2003) endorses, and uses to interesting effect, what I’m calling a strong version of the straight rule. In my reply to his paper I argue that only a weak version is plausible, since other things are rarely equal (Weatherson 2003a). Gwenneg thinks that Bostrom has the better of that debate.
Third, Gwenneg thinks that QALYs are a good measure of welfare. So the most beneficent action, the one that is best for well-being, is the one that maximises QALYs. This is hardly an uncontroversial view, but it does have some prominent defenders (McKie et al. 1998).
And fourth, Gwenneg endorses a welfarist version of Frank Jackson’s decision-theoretic consequentialism (Jackson 1991). That is, Gwenneg thinks the right thing to do is the thing that maximises expected welfare.
Putting these four beliefs together, we can see why Gwenneg shot the stranger. He believed that, on average, the sudden death of someone suffering from disease D increases the QALYs remaining in the community. By the straight rule, he inferred that each particular death of someone suffering from disease D increases the expected QALYs remaining in the community. By the equation of QALYs with welfare he inferred that each particular death of someone suffering from disease D increases the expected welfare of the community. And by his welfarist consequentialism, he inferred that bringing about such a death is a good thing to do. So not only do these beliefs make his action make sense, they appear to make it the case that doing other than he did would be a moral failing.
Now, I think the second, third and fourth beliefs I’ve attributed to Gwenneg are false. The first is a stipulated fact about the world of Gwenneg’s story. It is a fairly extreme claim, but far from fantastic. There are probably diseases in reality that are like disease D in this respect. So we’ll assume he hasn’t made a mistake there, but from then on every single step is wrong. But none of these steps is utterly crazy. It is not too hard to find both ordinary reasonable folk who endorse each individual step, and careful argumentation in professional journals in support of those steps. Indeed, I have cited just such argumentation. Let’s assume that Gwenneg is familiar with those arguments, so he has reason to hold each of his beliefs. In fact, and here you might worry that the story I’m telling loses some coherence, let’s assume that Gwenneg’s exposure to philosophical evidence has been so tilted that he has only seen the arguments for the views he holds, and not any good arguments against them. So not only does he have these views, but in each case he is holding the view that is best supported by the (philosophical) evidence available.
Now, I also suspect most readers will agree that Gwenneg has gone wrong somewhere, and shooting the stranger was a horribly wrong thing to do. Normative externalism itself is silent on whether that is right; it’s a second-order theory and leaves first-order questions about the straight rule, consequentialism and so on to one side. But it seems safe to assume that it really is wrong for Gwenneg to act as he did. The question, then, is whether Gwenneg’s sincere, and evidence-backed, normative beliefs change our evaluation of either his action, or of him.
The normative externalist says that they do not. Gwenneg is an example of what Nomy Arpaly (2003) calls misguided conscience. Although he does what his conscience directs, he is wrong, for his conscience is a terrible guide. And in these cases, the normative externalist says that acting in accord with one’s conscience does not redeem one’s wrong actions, even if one’s conscience is sensitive to one’s philosophical evidence. The normative internalist, in this story, says that Gwenneg’s conscience is in some way philosophically relevant.
Perhaps it makes Gwenneg’s actions reasonable, or rational, or sensible, in the same way that it is reasonable, rational and sensible for Baba to look for his keys in the drawer rather than the cookie jar. That is what would be true if we treated normative ignorance the same way that we treat factual ignorance. When one is wrong about a matter of fact, it is reasonable, rational and sensible to act in accord with one’s evidence, even if this leads to sub-optimal outcomes. We could easily imagine having the same attitude towards people who are wrong about a normative matter. And if we do, we have to say that Gwenneg is reasonable, rational and sensible to shoot the stranger.
Or perhaps Gwenneg’s sincere beliefs should not change our evaluation of his action, but should change our evaluation of him. Perhaps we should say that although his actions are still wrong, he is to be excused for the wrongdoing, in virtue of his sincere, reasonable belief in the rightness of his action. This version of internalism won’t be particularly central to the themes of this book, but I will discuss it at some length in chapter 5.
Now I could try here to use the example of Gwenneg to get a quick and crushing victory over the normative internalist. After all, I could argue that any theory that says anything positive about Gwenneg is clearly false, by the light of our clear epistemic and moral intuitions. But I don’t think that victory will be quite so swift. For one thing, there are other cases where the internalist appears to have a much more intuitive position than the externalist, and any comparison of the theories in terms of their fidelity to raw intuition would have to look at those cases too. For another, there are things the normative internalist could say about Gwenneg. Most internalists I know of prefer to start with the credences, rather than the beliefs, of agents like Gwenneg. And it is plausible (not obvious, but plausible) that to make the case work, I would have to say that Gwenneg’s credences in each of his three philosophical views were incredibly high. And it’s plausible (again, not obvious but plausible) that this makes the case so outlandish that it is either literally incoherent, or so incredible we can’t have reliable intuitions about it. Rather than try to prosecute this case, I’m going to step back and look at another way in which agents can be normatively ignorant.
Varieties of Normative Ignorance
In this book, I’m going to look at four kinds of normative ignorance. Gwenneg exemplifies the first three of these, and in this section I’ll spend a bit of time explaining the significance of the fourth.
Some people are epistemologically ignorant. The kind of epistemological ignorance we’ll be considering most here concerns ignorance about which things are made likely by one’s evidence. This could come about because the agent doesn’t know what their evidence is. Or it could be because they don’t know the answer to some hard questions about the relationship between evidence and theory. Imagine someone who transferred between universities after one year. At her first university, the statisticians were resolutely classical, and at the second they are resolutely Bayesian. She is a good student, and she can understand why each rejects the other’s view. And she is perfectly good at using the tools each set of professors taught her. But she isn’t such a good theorist as to be able to tell for sure which set of statisticians is correct. (One may doubt that any human is that good a theorist.) Now she finds herself confronted with a data set that one group of professors say is good evidence for a hypothesis p, and the other group says is rather weak evidence. What should she believe? The normative externalist says that what matters is which of her professors are correct; the internalist thinks that the evidence she has, or perhaps the beliefs she has, about the correctness of her professors is important.
Some people are morally ignorant. In the recent literature, when people write about moral ignorance, they often mean agents who have firmly held but false moral beliefs. In chapter 5, I’ll join this debate. But for now our focus is more generally on agents who fail to know the relevant moral truths. As in the last paragraph, it helps to imagine someone who is familiar with a debate between leading ethicists over some issue, but lacks the ability to conclusively resolve the debate, and the confidence to settle on an opinion without such conclusive resolution. She then is faced with a real-life situation that resembles the issue under debate between the leading ethicists. What should she do? The normative externalist says that it depends on which of the ethicists are right; the internalist thinks that her evidence, or perhaps her beliefs, about hard questions in ethics matters. These two kinds of ignorance, epistemological and ethical, will be the organising themes of the book. But there are two other kinds of ignorance that will be relevant to our story.
Some people are ignorant about what makes for human welfare. Again, this is an area where philosophers, and other theorists, differ widely in their published views. Some theorists think that welfare is a matter of having the right kinds of emotional states, such as happiness, pleasure or satisfaction. Some think it is a matter of having preferences that are satisfied, or perhaps having preferences of some special kind that are satisfied. And others think that there are a plurality of things that make for welfare, including perhaps health, knowledge and friendships. (Roger Crisp (2013, sec. 4) provides a useful survey of these views.) We can imagine someone who doesn’t know which of these theories is correct, but is trying to make a decision where the welfare-maximising choice will be different depending on which of them is correct. What should she do? Again, the issue relevant to this book is whether all that matters is which theory of welfare is actually correct, or whether the agent’s beliefs and evidence about philosophical theories matter.
Finally, we can imagine someone who is ignorant of the correct approach to decision making. (This isn’t a problem for Gwenneg. Or, at least, we didn’t make it a problem in the telling of the story.) Here, unlike perhaps in the previous three categories, there is an orthodoxy. It is that it is best to maximise expected value. But this orthodoxy is not beyond question. Lara Buchak (2013) has a book-length defence of a rival view. Since Buchak’s view is less well known than the competing views, I’ll spend a little more time on it, and on how it might affect what decisions people make.
Imagine that Llinos is trying to decide how much to value a bet with the following payoffs: it returns £10 with probability 0.6, £13 with probability 0.3, and £15 with probability 0.1. Assume that for the sums involved, each pound is worth as much to Llinos as the next. Now the normal way to think about how much this bet is worth to Llinos is to multiply each of the possible outcomes by the probability of that outcome, and sum the results. So this bet is worth 10 × 0.6 + 13 × 0.3 + 15 × 0.1 = 6 + 3.9 + 1.5 = 11.4. This is what is called the expected return of the bet. But there’s another way to get to the same result. Order each of the possible outcomes from worst to best, and at each step, multiply the probability of getting at least that much by the difference between that amount and the previous step. (At the first step, the ‘previous’ value is 0.) So Llinos gets £10 with probability 1, has an 0.4 chance of getting another £3, and has an 0.1 chance of getting another £2. Applying the above rule, we work out her expected return is 10 + 0.4 × 3 + 0.1 × 2 = 10 + 1.2 + 0.2 = 11.4. It isn’t a coincidence that we got the same result each way; these are just two ways of working out the same sum. But the latter approach makes it easier to understand Buchak’s distinctive view.
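For readers who like to check the arithmetic, here is a small sketch of both calculations in Python. The code and its variable names are mine, purely for illustration; nothing in the example depends on them.

```python
# Llinos's bet: £10 with probability 0.6, £13 with probability 0.3,
# £15 with probability 0.1. Each entry is a (value, probability) pair.
outcomes = [(10, 0.6), (13, 0.3), (15, 0.1)]

# Method 1: multiply each outcome by its probability and sum.
ev = sum(value * prob for value, prob in outcomes)

# Method 2: order outcomes worst to best; at each step, multiply the
# probability of getting at least that much by the increment over the
# previous step (the 'previous' value starts at 0).
ordered = sorted(outcomes)
ev2 = 0.0
previous = 0.0
for i, (value, prob) in enumerate(ordered):
    prob_at_least = sum(p for _, p in ordered[i:])
    ev2 += prob_at_least * (value - previous)
    previous = value

print(round(ev, 6))   # 11.4
print(round(ev2, 6))  # 11.4
```

Both methods are just different groupings of the same sum, so they agree exactly.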
She thinks that the standard approach, the one I’ve been setting out so far, is appropriate only for agents who are neutral with respect to risk. Agents who are risk seeking, or risk averse, should use slightly different methods. In particular, when we multiplied each possible gain by the probability of getting that gain, Buchak thinks we should instead multiply by some function f of the probability. If the agent is risk averse, then f(x) < x. To use one of Buchak’s standard examples, a seriously risk averse agent might set f(x) = x². (Remember that x ∈ [0, 1], so f(x) < x everywhere except the extremes.) If we assume that this is Llinos’s risk function, the bet I described above will have value 10 + 0.4² × 3 + 0.1² × 2 = 10 + 0.48 + 0.02 = 10.5.
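The worst-to-best method generalises directly: apply a risk function f to each ‘probability of getting at least this much’ before multiplying. This sketch (again my own illustrative code, not anything from Buchak’s text) recovers both values just computed; the identity function gives the orthodox result, and f(x) = x² gives the risk-averse one.

```python
def risk_weighted_value(outcomes, risk_function):
    """Value a gamble by the worst-to-best method, applying risk_function
    to each 'probability of getting at least this much'.

    outcomes: list of (value, probability) pairs.
    The identity risk function recovers orthodox expected value.
    """
    ordered = sorted(outcomes)
    total, previous = 0.0, 0.0
    for i, (value, _) in enumerate(ordered):
        prob_at_least = sum(p for _, p in ordered[i:])
        total += risk_function(prob_at_least) * (value - previous)
        previous = value
    return total

bet = [(10, 0.6), (13, 0.3), (15, 0.1)]
print(round(risk_weighted_value(bet, lambda x: x), 6))       # 11.4 (risk-neutral)
print(round(risk_weighted_value(bet, lambda x: x ** 2), 6))  # 10.5 (risk-averse)
```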
Now imagine a case that is simpler in one respect, and more complicated in another. Iolana has to choose between getting £1 for sure, and getting £3 iff a known to be fair coin lands heads. (The marginal utility of money to Iolana is also constant over the range in question.) And she doesn’t know whether she should use standard decision theory, or a version of Buchak’s decision theory, with the risk function set at f(x) = x². Either way, the £1 is worth 1. (I’m assuming that £1 is worth 1 util, expressing values of choices in utils, and not using any abbreviation for these utils.) On standard theory, the bet is worth 0.5 × 3 = 1.5. On Buchak’s theory, it is worth 0.5² × 3 = 0.75. So until she knows which decision theory to use, she won’t know which option is best to take. That’s not to say she won’t know which option will return the most; she can’t know that until the coin is flipped. It’s that she won’t know which bet is rational to take, given her knowledge about the setup, until she has the right theory of rational decision making.
In the spirit of normative internalism, we might imagine we could solve this problem for Iolana without resolving the dispute between Buchak and her orthodox rivals. Assume that Iolana has, quite rationally, credence 0.5 that Buchak’s theory is correct, and credence 0.5 that orthodox theory is correct. (I’m assuming here that a rational agent could have positive credence in Buchak’s views. But that’s clearly true, since Buchak herself is rational!) Then the bet on the coin has, in some sense, 0.5 chance of being worth 1.5, and 0.5 chance of being worth 0.75. Now we could ask ourselves, is it better to take the £1 for sure, or to take the bet that has, in some sense, 0.5 chance of being worth 1.5, and 0.5 chance of being worth 0.75?
The problem is that we need a theory of decision to answer that very question. If Iolana takes the bet, she is guaranteed to get a bet worth at least 0.75, and she has, by her lights, an 0.5 chance of getting a bet worth another 0.75. (That 0.75 is the difference between the 1.5 the bet is worth if orthodox theory is true, and the 0.75 it is worth if Buchak’s theory is true.) And, by orthodox lights, that is worth 0.75 + 0.5 × 0.75 = 1.125. But by Buchak’s lights, that is worth 0.75 + 0.5² × 0.75 = 0.9375. We still don’t know whether the bet is worth more or less than the sure £1.
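The regress can be seen in a few lines of arithmetic. This is just my sketch of the calculation in the previous paragraph; the variable names are mine.

```python
# Iolana's first-order values for the coin bet (the sure £1 is worth 1 either way):
orthodox_value = 0.5 * 3       # 1.5 on orthodox theory
buchak_value = 0.5 ** 2 * 3    # 0.75 on Buchak's theory, with f(x) = x**2

# Treat "take the bet" as itself a gamble: guaranteed at least 0.75,
# with an 0.5 chance of the extra 0.75 (if orthodox theory is true).
low, high = buchak_value, orthodox_value

# Evaluating that second-order gamble by orthodox lights...
second_order_orthodox = low + 0.5 * (high - low)       # 1.125 > 1: take the bet
# ...and by Buchak's lights, with f(0.5) = 0.25:
second_order_buchak = low + 0.5 ** 2 * (high - low)    # 0.9375 < 1: decline it

print(second_order_orthodox, second_order_buchak)
```

The two second-order verdicts straddle the sure £1’s value of 1, so the move to the second order leaves the original disagreement intact.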
Over the course of this book, we’ll see a lot of theorists who argue that in one way or other, we can resolve practical normative questions like the one Iolana faces without actually resolving the hard theoretical issues that make the practical questions difficult. And one common way to think this can be done traces back to an intriguing suggestion by Robert Nozick (1994). Nozick suggested we could use something like the procedure I described in the previous paragraph. Treat making a choice under normative uncertainty as taking a kind of bet, where the odds are the probabilities of each of the relevant normative theories, and the payoffs are the values of the choice given the normative theory. And the point to note so far is that this won’t actually be a technique for resolving practical problems without a theory of decision making. At some level, we simply need a theory of decision.
We can restate the issue here by taking a small detour into work on deontic modals, words like ‘ought’ and ‘should’. (These words aren’t strictly synonymous, but the differences between them won’t matter for the points I’m making.) One of the striking things about these words is that their correct application is very sensitive to which facts are taken as given. Often these facts are fixed by context, but we can also stipulate which facts we are assuming. To see this in action, imagine that Baba has found his car keys, driven to the shops, and is trying to buy sunscreen for his daughter. He has to choose between three brands: Active, Badger and Cape. And here are the relevant facts about the choice.
- Badger costs £12, the other two cost £10, and Baba prefers spending less money to spending more.
- Each sunscreen is equally effective at preventing sunburn and skin disease from excessive exposure to the sun.
- Baba knows that his daughter is not allergic to Badger, and that she is allergic to one of Active and Cape, but he does not know which one, and his evidence is indifferent between the two.
- In fact, she is allergic to Cape.
- But Baba believes, for no good reason, she is allergic to Active.
Now I think we can understand the word ‘ought’ in each of the following sentences so that they turn out to be true.
1. Given all the facts, including the fact that his daughter is allergic to Cape and not Active, Baba ought to buy Active.
2. Given the evidence available, Baba ought to buy Badger, since it isn’t worth risking an allergic reaction to save £2.
3. Given his beliefs, including his belief that his daughter is allergic to Active and not Cape, Baba ought to buy Cape.
Sometimes the ‘ought’ in 1 is called the objective ought, since it tracks what is best given the objective state of the world, and the ‘ought’ in 3 is called the subjective ought, since it tracks what is best given the beliefs of the agent. Neither term is particularly happy. This terminology suggests that these two are particularly important for evaluation, yet this case suggests that they aren’t. It shows that there can be cases where the right thing to do is something one knows to not be objectively best. Baba is wrong about Cape, but he knows that the best outcome won’t come from buying Badger. Yet that is what he should do. And the ‘objective’/‘subjective’ terminology suggests that we have two (or more) different words here, rather than one context-sensitive term. (Imagine a philosopher saying that English had several ‘nearby’s, a driving nearby for things that are within easy driving distance, a walking nearby for things that are within easy walking distance, and so on.) So let’s just look at the different ways that the English term ‘ought’ can be bounced around by context.
I think we can make sense of a use of ‘ought’ within a sentential context where it is clear we are assuming a philosophical theory that we otherwise take to be false. So both defenders of decision-theoretic orthodoxy, and believers in Buchak’s heterodox view, can accept that 4 and 5 are correct.
4. Given orthodox decision theory, Iolana ought to take the bet.
5. Given Buchak’s theory, Iolana ought to decline the bet.
Believers in orthodoxy think that ordinarily, talk of what someone ought to do is talk about what maximises expected value. But they can still make sense of the ‘ought’ in 5. A fact that is usually taken to be fixed as a background fact when evaluating a claim involving ‘ought’, the fact that decisions with higher expected value are better than decisions with lower expected value, is removed from the background set of assumptions, and replaced with a different proposition about the truth of Buchak’s decision theory. It is somewhat striking that the English word is this flexible.
We can perhaps find even more flexibility in the word by complicating the example a little further. Wikolia is like Iolana in almost every respect. She gives equal credence to orthodox decision theory and Buchak’s alternative, and no credence to any other alternative, and she is facing a choice between £1 for sure, and £3 iff a fair coin lands heads. But she has a third choice: 55 pence for sure, plus another £1.60 iff the coin lands heads. It might be easiest to label her options A, B and C, with A being the sure pound, B being the bet Iolana is considering, and C the new choice. Then her payoffs, given each choice and the outcome of the coin toss, are as follows.

| Option | Heads | Tails |
|--------|-------|-------|
| A      | 1     | 1     |
| B      | 3     | 0     |
| C      | 2.15  | 0.55  |
The expected value of Option C is 0.55 + 0.5 × 1.6 = 1.35. (I’m still assuming that £1 is worth 1 util, and expressing values of choices in utils.) Its value on Buchak’s theory is 0.55 + 0.5² × 1.6 = 0.95. Let’s add those facts to the table, using EV for expected value, and BV for value according to Buchak’s theory.

| Option | Heads | Tails | EV   | BV   |
|--------|-------|-------|------|------|
| A      | 1     | 1     | 1    | 1    |
| B      | 3     | 0     | 1.5  | 0.75 |
| C      | 2.15  | 0.55  | 1.35 | 0.95 |
Now remember that Wikolia is unsure which of these decision theories to use, and gives each of them equal credence. And, as above, whether we use orthodox theory or Buchak’s alternative at this second level affects how we might incorporate this fact into an evaluation of the options. So let EV2 be the expected value of each option if it is construed as a bet with an 0.5 chance of returning its expected value, and an 0.5 chance of returning its value on Buchak’s theory, and BV2 the value of that same bet on Buchak’s theory.

| Option | Heads | Tails | EV   | BV   | EV2   | BV2    |
|--------|-------|-------|------|------|-------|--------|
| A      | 1     | 1     | 1    | 1    | 1     | 1      |
| B      | 3     | 0     | 1.5  | 0.75 | 1.125 | 0.9375 |
| C      | 2.15  | 0.55  | 1.35 | 0.95 | 1.15  | 1.05   |
And now something interesting happens. In each of the last two columns, Option C ranks highest. So arguably, Wikolia can reason as follows: whichever theory I use at the second order, option C is best. So I should take option C. So we can make sense of each of the following claims.
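The arithmetic behind Wikolia’s case can be sketched in the same way as before. The function names and structure here are mine; the risk function is Buchak’s illustrative f(x) = x².

```python
def buchak_value(outcomes, f=lambda x: x ** 2):
    """Worst-to-best valuation with risk function f applied to each
    'probability of getting at least this much'."""
    ordered = sorted(outcomes)
    total, previous = 0.0, 0.0
    for i, (value, _) in enumerate(ordered):
        prob_at_least = sum(p for _, p in ordered[i:])
        total += f(prob_at_least) * (value - previous)
        previous = value
    return total

def expected_value(outcomes):
    # The identity risk function recovers orthodox expected value.
    return buchak_value(outcomes, f=lambda x: x)

# Payoffs in utils on tails/heads, each with probability 0.5.
options = {
    "A": [(1.0, 0.5), (1.0, 0.5)],    # £1 for sure
    "B": [(0.0, 0.5), (3.0, 0.5)],    # £3 iff heads
    "C": [(0.55, 0.5), (2.15, 0.5)],  # 55p for sure, plus £1.60 iff heads
}

results = {}
for name, payoffs in options.items():
    ev = expected_value(payoffs)
    bv = buchak_value(payoffs)
    # Second order: treat the option as a 50/50 gamble between its EV
    # and its BV, valued by orthodox lights and by Buchak's (f(0.5) = 0.25).
    lo, hi = min(ev, bv), max(ev, bv)
    ev2 = lo + 0.5 * (hi - lo)
    bv2 = lo + 0.25 * (hi - lo)
    results[name] = (ev, bv, ev2, bv2)
    print(name, round(ev, 4), round(bv, 4), round(ev2, 4), round(bv2, 4))
```

Running this reproduces the table: Option C comes out highest on both EV2 and BV2, even though it is highest on neither EV nor BV.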
6. Given orthodox decision theory, Wikolia ought to take Option A.
7. Given Buchak’s decision theory, Wikolia ought to take Option B.
8. Given her credences over decision theories, Wikolia ought to take option C.
I’m not completely convinced that 8 is really a coherent thing to say, but let’s work on the assumption that it is. The normative internalist, as I’m imagining them, thinks that 8 is both true and important. There is some important sense in which Option C is the right thing for Wikolia to do. It is interesting, but not objectionable, that Option C is not the best thing to do according to either decision theory that Wikolia takes seriously. After all, buying Badger sunscreen isn’t the best thing to do according to either hypothesis about his daughter’s allergy that Baba takes seriously, yet it is the thing for Baba to do.
What is more striking is how many assumptions we had to make in order to get a sentence like 8 to turn out true. We obviously had to give Wikolia an option that Iolana didn’t have. In the sense in which Wikolia ought to take Option C, it is completely unclear what Iolana ought to do. And we had to rule out, tacitly, every possible decision theory other than the two I’ve been discussing. Without that ruling out, there is no obvious path from the premise that Option C is best according to two decision theories to the conclusion that it is best overall.
More generally, it is striking that neither of the following two sentences is true, whatever we think about first-order decision theory.
9. Given the correct decision theory, Wikolia ought to take Option C.
10. Setting aside everything we know about the normative, Wikolia ought to take Option C.
We know 9 is false, because the correct theory either says that Wikolia ought to take A (if orthodox theory is correct), or says that Wikolia ought to take B (if Buchak is correct). Neither way will it say that she ought to take C. And we know 10 is false, because without any normative knowledge, we can’t say anything whatsoever about what should be done. If 8 is true, it has to be true relative to an interpretation of ‘ought’ that takes as background neither none nor all of the normative facts. The short version of what is to follow in the rest of this book is that I’m going to argue that any such interpretation is either unmotivated, incoherent, or unattractive.
I’m not going to talk much about what people ought to do or ought to believe in this book, instead preferring to talk about the goodness, rightness and rationality of their actions and beliefs. To be sure, I did set things up that way in the very first paragraph of the introduction, but it’s now time to kick away the ladder. That’s largely because debates about ought-claims, like 8, can easily end up getting two somewhat distinct issues entangled. One issue is whether sentences like 8 have a true reading. The other is whether that reading is important to ethics, epistemology or decision theory. I’m most interested in the second question. Perhaps there is a sense in which, setting aside the things that determine what she really ought to do, Wikolia ought to take option C. What I want to deny is the philosophical interest of this. For almost anything you like, we can make a set of assumptions such that given those assumptions, that is the thing a person ought to do. But from that nothing follows about what would be right or good to do. Nor does anything follow about what they ought to do in the sense of ‘ought’ “most immediately relevant to action [which is] the primary business of ethical theory to deliver” (Jackson 1991, 472). It’s rarely possible in philosophy to completely separate linguistic issues from moral and epistemological ones; but it is better to do what one can. And here that means leaving aside tricky questions about the semantics of ‘ought’.
Three Arguments for Normative Externalism
Normative externalism is a negative thesis; it says that certain kinds of evidence and belief do not matter to the goodness, rightness and rationality of beliefs and actions. So it is natural that the arguments for it will also be somewhat negative. There are three themes running through this book as to why those kinds of evidence and belief do not matter.
The first theme is one that I’ve introduced at some length already. Assume that a person’s normative evidence, that is, their evidence about the correct normative theory, is relevant to evaluating their belief or action. We still need some theory about how someone should believe and act, given some normative evidence. We need, that is, a second-order theory about how to believe and act when one doesn’t know how to believe and act. And that second-order theory may be unknown to the person being evaluated, and may even be somewhat improbable given their evidence.
Following work by Miriam Schoenfield (2014a), I’ll argue that the normative internalist faces a dilemma here. Either there is an epistemic constraint on the correctness of a second-order theory or there is not. For concreteness, let’s frame that constraint in terms of knowledge, though we could frame it in other ways too. So our starting point is that it either is, or is not, possible for the second-order theory to be correct, in the sense that it is appropriate to use it to evaluate an agent, while the agent does not know it is true. If this is possible, the motivation for internalism collapses. For if it can be appropriate to use a theory to evaluate an agent while the agent does not know that theory is true, we may as well use the correct moral or epistemic theory, instead of some second-order theory. If this is not possible, then internalism becomes implausible, for any plausible second-order theory will be such that an agent could have excellent reason to believe that it is not true.
The second theme is that normative internalism suggests that ideal agents have rather strange looking motivations. In the moral case, I’ll argue, someone could only do what the normative internalist wants them to do if they were motivated to do the right thing as such. And that, I think, is not an attractive character. I don’t mean here to be denying the platitude that good people want good things. What I mean to be saying, following work by Michael Smith (1994), Nomy Arpaly (2003) and Julia Markovits (2010; 2012), is that good people want the things that are good, not goodness itself. The good person is motivated to rescue the drowning child because there is a child who is in danger, not because it would be good, or virtuous, or heroic, to rescue the child. Similarly in epistemology, the normative internalist thinks that the wise person is one who prefers rational beliefs to true ones, and this seems similarly mistaken.
The third theme is that normative internalism implies some very strange things about some very familiar cases. I already discussed in some detail a case of what Arpaly calls misguided conscience. But I think more striking is what internalists have to say about what Arpaly calls inadvertent virtue. These are cases where someone does the right thing, for the right reasons, although they mistakenly believe that they are doing the wrong thing. There is something quite praiseworthy about such people, and such actions. They are not inexplicable, in the way that Baba going straight to the cookie jar to get his car keys would be inexplicable. But the most natural forms of internalism say that these actions are not praiseworthy, and are just as inexplicable as Baba going straight to the cookie jar. In epistemology, internalists say strange things about a class of cases that have not received sufficient attention. These are cases where the agent has misleading evidence that rationality requires her to be more confident in a target proposition than she currently (and rationally) is, and therefore she is rationally required to take more aggressive, less cautious actions. When we flesh out the details of these cases, we’ll see that this is a very strange reason for thinking one should believe and act.
Three Arguments for Internalism
Given how one-sided the presentation has been so far, you may, dear reader, be doubting your judgment in picking up a book about the debate between normative externalism and normative internalism. It doesn’t seem like much of a fight! But there have been a number of important reasons put forward to defend varieties of normative internalism.
Deòrsa is trying to decide what to have for dinner, hamburger or tofu. He knows that there is nothing morally wrong about having tofu. He is confident, but not at all certain, that there is nothing morally wrong with having hamburger. But he also thinks that if having hamburger is wrong, it is seriously wrong. In fact, in the world of this little story, there isn’t anything morally wrong in having hamburger. Still, Deòrsa runs a kind of moral risk in having the hamburger. It’s similar, in a way, to the risk someone runs in driving dangerously, even if no harm comes of it.
Arguably, it is immoral to run risks in this way. That is, there’s a plausible case that there is a wrong of moral recklessness, and Deòrsa, like all meat-eaters, commits it every time he eats meat. But it’s also plausible that the normative externalist cannot account for the wrong of moral recklessness. So normative externalism is false.
That argument is far from watertight, but ultimately I accept a large part of it. I think there’s no way for a normative externalist to account for the wrongfulness of moral recklessness. And that is, I think, a serious intuitive cost to normative externalism. But it’s not a cost that should worry us, for there is an argument that there ultimately is no such wrong as moral recklessness. I’ll develop such an argument in chapter 3. The core idea is that if there were a wrong of moral recklessness, moral agents would be required to aim at the good as such. But moral agents are not required to aim at the good as such, only to aim at the things that are in fact good. So there is no wrong of moral recklessness.
We’ve already seen some cases where the intuitive thing to say is what the externalist says. Internalists have not, as a rule, proffered cases that have this feature. We’ve already seen one case where it is easy to have internalist intuitions: Deòrsa the nervous carnivore. Here is another.
Tatjana is a doctor. She has been on a very long shift, although she doesn’t feel tired yet. She sees a patient, and infers from his symptoms that the patient has a disease, call it E, for which the cure is drug X. If she’s wrong about the diagnosis, then giving the patient drug X could be very damaging. She’s good at her job, and in fact she’s right about this diagnosis — the patient does have E and Tatjana thinks that because she’s correctly identified the symptoms. But Tatjana knows that doctors who have been on duty as long as she has are often over-confident in their judgment. And while the patient will suffer somewhat while she gets someone to double check her diagnosis, she decides to get a second opinion before administering drug X. (This case is modelled on one offered by David Christensen (2010a, 186).)
It’s natural in this case to reason as follows.
1. Tatjana is right to get a second opinion before administering drug X, and it would be wrong to administer the drug without getting a second opinion.
2. If Tatjana could rationally believe that the patient has E, it would not be wrong to administer drug X.
3. So Tatjana cannot rationally believe the patient has E. (From 1, 2)
4. But given the evidence, a diagnosis of E is reasonable, since it is supported by the objective evidence.
5. So in Tatjana’s case, the reasonable and the rational come apart, just as the internalist says and the externalist denies. (From 3, 4)
I’m going to argue that premise 4 is wrong, and that it rests on a mistaken notion of evidence. Building up to this argument will take some time in chapter 6, but we can get a quick sense of what’s going on in cases like this one by considering a variant on David Christensen’s much discussed restaurant case. (Christensen 2009, 757)
Deòrsa developed a sudden interest in discovering how much he’d paid in electricity bills over the past two years. So he found the last twenty-four bills he had, wrote down the total amount of each bill in a column, and added the numbers by hand. By a minor miracle, he performed all the transcriptions and steps of the addition correctly, so he correctly wrote at the bottom of the page that he had paid $2345.67 for electricity over that period. (Let q be the proposition that Deòrsa paid $2345.67 for electricity over the last 24 months.) But Deòrsa knew that he was extremely unreliable at this kind of activity. What should Deòrsa believe?
It’s very intuitive that Deòrsa should not be very confident that he had paid $2345.67 for electricity over the past two years. And that’s so even though Deòrsa’s evidence entails that he had paid just that much. I’m going to argue that this isn’t actually a problem for the externalist. The only way to make this into a problem for the externalist is to assume a very close connection between entailment and evidential support, and there are good reasons to keep those two notions separate. I’ll have much more to say about both the cases that have been thought to tell in favour of internalism, and about the methodology of cases, in what follows.
The externalist offers a fairly simple piece of advice to people facing a moral challenge: Do the right thing. But as a general piece of advice, Do the right thing might sound not much more helpful than Buy low, sell high. We need, it might be thought, more helpful advice. Put more carefully, we can get the following argument for internalism.
1. Our most important norms should be sources of usable advice.
2. If normative externalism is true, our norms are not sources of usable advice.
3. If normative internalism is true, our norms are sources of usable advice.
4. So normative externalism is false, and we have a reason to believe normative internalism is true.
Note that I’m not here assuming that normative externalism and normative internalism are contradictories; there are positions that might best be classified as falling into neither camp. If they were contradictories, the conclusion of this argument would be highly redundant.
It isn’t always clear what it is for advice to be usable. If we have rather generous standards for what counts as a usable norm, then premise 2 of the argument is false. After all, we can often tell what is the right thing to do. If we have rather strict standards, then premise 1 is false, since it amounts to the claim that the most important norms must be luminous. (A norm is luminous if whenever it applies, it is possible to know that it applies.) But Timothy Williamson (2000) has shown that nothing interesting is luminous, and our most important norms are interesting. I suspect that there is no reading of ‘usable’ that makes both premises 1 and 2 true.
More strongly, I’m very confident that there is no reading of ‘usable’ that makes both premises 2 and 3 true for any plausible form of internalism. The most plausible forms of internalism all make it difficult to say in some cases what it is that one should do, even by internalist standards. The usability criterion threatens everyone. This is a consequence of the regress argument I sketched in the previous section. And the development of that regress argument in more detail throughout the book will be my primary response to this argument. Everyone except the most radical subjectivist will, I argue, be forced to acknowledge standards for evaluating agents that those very agents are not in a position to accept, and hence be forced to acknowledge norms that could not play the kind of guidance role the internalist hopes for them to play. And since the most radical forms of subjectivism are clearly false, the connection between evaluation and guidance must be more tenuous than the internalist assumes.
Three Dimensions of Debate
Ethics or Epistemology
I’ve been moving back and forth fairly freely between talking about ethics and talking about epistemology, and hence using the labels ‘normative externalism’ and ‘normative internalism’ as labels for positions in both ethics and epistemology. But one might split the two views up, holding what I’ve called externalist views in epistemology and internalist views in ethics, or vice versa. When I started thinking about these matters, I thought that such positions would be completely unmotivated. I’ve now realised that was a mistake. I’m a committed externalist, so I don’t think the motivations for these mixed views I’ll give work, but I do think there are motivations to be found.
It will help, in outlining the positions, to have three characters to consider. So I’ll set out the positions I have in mind by considering how each thinks about these three characters:
- Durga is an example of inadvertent virtue. She has good reason to believe that doing X is wrong, and she does believe that doing X is wrong, but in fact X is an excellent thing to do, and she does X.
- Kaylyn is an example of misguided conscience. Like Durga, she has good reason to believe that doing X is wrong, and she does believe that doing X is wrong, but in fact X is an excellent thing to do, and it is regrettable that she does Y instead of X.
- Guinevere has excellent reason to believe that doing Y is wrong, and doing X is right. But she enjoys doing Y, so she convinces herself it is right, and does it.
You can fill in for yourself examples of what X and Y might be, but what matters for now is the schematic form of the characters, not any particular example.
For the first ‘mixed’ position, we might think that excuses play a different role in ethics than they do in epistemology. So consider the theorist who agrees with the externalist that Durga is praiseworthy, but disagrees about Kaylyn; this theorist thinks Kaylyn’s actions are excusable in virtue of her sincere moral ignorance. It isn’t clear that such a theorist need disagree at all with the normative externalist in epistemology. I will discuss this kind of position, which has been endorsed by several writers, in section 5.3.
To motivate the converse position — normative externalism about epistemology, with normative internalism in ethics — consider Guinevere. A normative internalist who said that the most important thing is to do what one believes is moral should say that Guinevere is, in the most important sense, doing what she should. Indeed, it would in the most important sense be wrong for Guinevere to do anything else. I assume that many readers will not be particularly attracted to such a view, and might look for a way out.
The obvious internalist-friendly way out is to say that agents should do what their evidence implies is good, and say that Guinevere acts wrongly because she doesn’t do that. And it is important to import a little normative externalism here. If Guinevere’s motivated reasoning is comprehensive enough, and we may well assume it is, then she will believe that her evidence supports the view that doing Y is right. We have to say that it matters that she is in fact wrong, and that her evidence does not support this moral belief, or else we still won’t have grounds to criticise her. That is, we only get to criticise Guinevere if we adopt normative externalism in epistemology. So that could be a reason to be a normative externalist in epistemology, and a normative internalist in ethics.
So we shouldn’t assume that arguments for, or against, normative externalism in one domain will easily carry across to the other. There are a lot of positions around here that satisfy at least minimal coherence.
Supplement or Replace
A natural way to motivate normative internalism is to argue that the externalist has simply missed a large class of norms. To use some natural terminology, the externalist has focussed on objective norms, and ignored the importance of subjective norms. Now I think this is all wrong — externalist-friendly norms are subjective in any sense in which subjective norms are needed. And as I argued above, it seems that the objective/subjective distinction obscures more than it enlightens. But assume it’s right, that the externalist is simply blind to an important class of norms, and there are these extra subjective norms on top of the externalist’s objective norms. (I’ll use this assumption for the rest of this subsection.) How should we relate all these norms to agents?
A natural approach is to say that they all apply to every agent. This leads to a not completely crazy set of judgments about Durga and Kaylyn. Both of them are doing something wrong. Durga is being a hypocrite, doing something she thinks is wrong. And Kaylyn is being unjust, doing something that is actually wrong. This suggests that Durga and Kaylyn are in a moral dilemma. It is, one might think, an asymmetric moral dilemma. It is worse to do what Kaylyn does than what Durga does. But that isn’t something that we should think is problematic, unless we confuse what Philippa Foot (1983) calls ‘type 1’ and ‘type 2’ ought propositions. So it seems to me that this is a coherent thing to say about the case. I think it’s false — Durga doesn’t do anything wrong — but it’s coherent.
The previous paragraph (up to the last sentence) sets out a kind of normative internalism where subjective norms supplement objective norms. This is a view where moral dilemmas will be easy to come by; they arise any time the agent is misled about morality. Some may take that to be a serious cost of the view, thinking that moral dilemmas should be rare. I don’t think that’s a good objection to the view; it is a platitude that doing well is hard, and we should be open to the possibility that doing well in every respect is frequently impossible. I think the view falls to the arguments of chapters 3 and 4, which are arguments that no coherent sense can be given to the notion of subjective (in this sense) norms, but I don’t dismiss it on the basis of considerations about dilemmas.
A similar view has been introduced into epistemology by David Christensen (2010b). He thinks that Chantrea is in a dilemma. She should believe two plus two is four, since that’s supported by her evidence, and she should not believe it, since she shouldn’t believe she has good evidence for it. Whatever Chantrea does, she will violate some norm she should not violate. This too is a view where subjective norms supplement objective norms.
The internalist alternative is to say that subjective norms replace objective norms. On this view, there is a right thing for Kaylyn and Durga to do, namely Y. And there is a right attitude for Chantrea to have, namely to be uncertain about whether two plus two is four. To motivate this kind of view, think about what a decision-theoretic consequentialist (e.g., Jackson (1991)) says about cases where act A maximises expected utility, but act B will as a matter of fact maximise utility. The decision-theoretic consequentialist doesn’t say that there is any kind of dilemma here, or that doing A rather than B is in any sense wrong. Rather, the injunction to maximise expected utility replaces any injunction to maximise actual utility. The normative force of utility maximisation is wholly covered by the rule: Maximise expected utility! Similarly, this kind of internalist thinks that the relevance of objective norms to actions is wholly covered by their contribution to subjective norms.
Belief or Evidence?
The last dimension I’ll focus on has already come up several times in the presentation of the debate. It concerns whether the normative internalist focusses on what the agent believes is moral/reasonable, or on what the agent’s evidence suggests is moral/reasonable. For simplicity, I’ll focus on the moral case in setting the distinction out, though it applies in both ethics and epistemology. Very roughly, the issue is which of these are norms.
1. Do the right thing!
2. Do the thing you think is right!
3. Do what your evidence says is right!
Simplifying a lot, the externalist thinks that 1 is a norm, and 2 and 3 aren’t. The internalist dissents. Whether she says 1 is a norm turns on whether she has a supplementing or replacing view of subjective norms, in the sense of the previous subsection. But either way, she has a choice between 2 and 3. It isn’t an exclusive choice. Especially if she’s already signed up for widespread moral dilemmas in virtue of being a supplementing theorist, she may well say that both 2 and 3 are norms. So if some man objectively should do A, believes he should do B, and has evidence that supports doing C, it might be that whatever he does, he violates two norms. But probably the more popular form of internalism will choose between 2 and 3.
The distinction I’m drawing here is a familiar one from the literature on probability. Let’s say our internalist thinks belief is too crude a notion to use here, and instead says the relevant norm is Do what’s probably right! Is that a variant of 2 or of 3? The answer depends on the interpretation of ‘probably’. If the probabilities the theorist cares about are subjective credences, then she’s putting forward a variant of 2; if they are evidential probabilities, then she’s putting forward a variant of 3. And the same goes mutatis mutandis if the theorist complicates things further and says the salient norm is Maximise expected rightness. The notion of expectation here is tied to a notion of probability, and again we need to ask whether this is to be understood as a subjective credence or as an evidential probability in order to see whether we are looking at a sophisticated variant of 2 or of 3. I’m going to focus on very crudely stated norms like 2 and 3 rather than their sophisticated probabilistic variants, in no small part because it avoids just this ambiguity. But when reading theorists who use probabilistic language, we’ll have to be careful in how to translate them into more familiar terms, again because of the ambiguity.
Dimensions and Arguments
I’ve spent some time laying out the different kinds of internalist positions because I think it’s easy to overlook how different they are, and how different are the arguments that support them. The last three subsections have described, respectively, 3, 2 and 3 different ways to be internalist. Not all combinations will be at all philosophically attractive, but that does make in principle 3 × 2 × 3 = 18 positions that I’m opposed to. And some of these could be subdivided further, especially the options that are supplementing rather than replacing. So I’m not short of opponents!
But these opponents don’t all agree with each other. Indeed, on deep matters, they can’t agree with each other. Let’s consider two of the kinds of arguments that we’ve put forward on behalf of the internalist.
One argument was from the idea that norms should be guiding. If we take this seriously, we are not only led to internalism, but to a very special kind of internalism. We must be internalists both in ethics and in epistemology, or else there are norms that apply without being guiding. We must think these subjective norms replace rather than supplement, or else there are norms that bind an agent without being capable of guiding the agent. And it must be the agent’s beliefs rather than her evidence that matters, because of course the agent could be wrong or uncertain about what her evidence supports.
Another argument we gave was from cases. As best as I can tell, the kind of internalism we get from that is strongly opposed to the internalism discussed in the previous paragraph. It has to be supplementing rather than replacing, on pain of saying that Kaylyn’s actions are strictly better than Durga’s in the most important sense of ‘better’. That’s surely not what intuition about the case says, and we’re considering, for this paragraph at least, the argument from cases. And it has to focus on evidence not beliefs, on pain of saying that Guinevere is no worse than Kaylyn. Intuition is, I think, reasonably clear that wrongdoing because of motivated reasoning like this is worse than wrongdoing on the basis of sincere and reasonable error.
It’s a little trickier to say what kind of internalism is best supported by the argument from recklessness. But I think it is far from obvious that it will be the same as either of the two forms of internalism discussed in the previous two paragraphs.
Now at one level, none of this matters to the conclusion of the book. I think all the internalist-friendly arguments fail. But I do want to stress that the arguments I’m opposing are not mutually reinforcing. On the contrary, they are mutually undermining.
In any philosophical debate, and especially I think in debates between varieties of internalism and externalism, there is a widespread tendency to search for moderate or middle-ground views. I have a reflexive disapproval of these attempts to find a middle ground. I suspect the truth is usually at one extreme, and even when it isn’t, the debate progresses more fruitfully if more people try to defend extreme views as well as they can. (In many ways the core theses of this book are Aristotelian and anti-Platonist, but here is one respect in which I’m more Platonist than Aristotelian.) To head off some of these attempts to find middle ground, I’m going to discuss three of the more natural moves a moderate might make, and argue that two of them at least are really just ways of taking sides in the debate.
The first is to say that the internalists and externalists are talking at cross-purposes, because they are both right about different things. The normative internalist is talking about subjective normativity, and the normative externalist is talking about objective normativity. Hopefully it is clear from the last two sections that this isn’t a middle-ground rival to the parties to this debate; it is just the most natural and sensible form of internalism.
The second is to say that the theorists are talking at cross-purposes because their theoretical differences turn on differences in how they classify kinds of evidence, or the grounds of moral action. What the internalist calls higher-order evidence, the externalist calls just more first-order evidence. And what the internalist calls misleading evidence about morality, the externalist calls first-order reasons to act a different way. Again, it might be clear from what I’ve said above what I’m going to say about this position: it isn’t moderate at all, it is just the most natural and sensible form of normative externalism in epistemology. It is also a form of normative externalism in ethics, though a less natural and sensible view, since it requires that misleading evidence about ethics generates moral reasons. I think that view is more trouble than it is worth, but it is recognisably externalist.
The third moderate position I’ll discuss is, I think, genuinely moderate. It is the view that certain kinds of false moral or epistemological beliefs can exculpate. I’ve classified this as an internalist view, but the classification is a little arbitrary. As I mentioned, I’m going to eventually argue against this position. But I think the arguments are separable from the rest of the book. To the extent that a middle ground even exists in this debate, I think it is here.
Why Call This Externalism?
There are so many views already called externalist in the literature that I feel I should offer a few words of defence of my labelling my view externalist. In the existing literature I’m not sure there is any term, let alone an agreed upon term, for the view that higher-order considerations are irrelevant to both ethical and epistemological evaluation. So we needed some nice term for my view. And using ‘externalist’ suggested a useful term for the opposing view. And there is something evocative about the idea that what’s distinctive of my view is that it says that agents are answerable to standards that are genuinely external to them. More than that, it will turn out that there are similarities between the debates we’ll see here and familiar debates between internalists and externalists about both content and about the nature of epistemic norms.
In debates about content, we should not construe the internalism/externalism debate as a debate about which of two kinds of content are, as a matter of fact, associated with our thought and talk. To set up the debate that way is to concede something that is at issue in the debate. That is, it assumes from the start that there is a coherent notion of narrow content, and it really is a kind of content. But this is part of what’s at issue. The same point is true here. I very much do not think the debate looks like this: The externalist identifies some norms, and the internalist identifies some others, and then we debate which of those norms are really our norms. At least against some internalist opponents, I deny that they have so much as identified a kind of norm that we can debate whether it is our norm.
In debates in epistemology, there is a running concern that internalist norms are really not normative. If we identify justified belief in a way that makes it as independent of truth as the internalist wants justification to be, there is a danger that we should not care about justification. Internalists have had interesting things to say about this danger (Conee 1992), and I don’t want to say that it is a compelling objection to internalism. But it is a danger. And it’s a danger that I think the normative internalist can’t avoid. Let’s say we can make sense of a notion that tracks what the internalist thinks is important. In section 5.1 I’ll argue that not being a hypocrite is such a notion; the internalist cares a lot about it, and it is a coherent notion. There is a further question of whether this should be relevant to our belief, our action, or our evaluation of another. If someone is a knave, need we care further about whether they are a sincere or hypocritical knave?
All that said, there are two ways in which what I’m saying differs from familiar internalist/externalist debates. One is that what I’m saying cross-cuts the existing debates within ethics and epistemology that often employ those terms. Normative externalism is compatible with an internalist theory of epistemic justification. It is consistent to hold the following two views:
- Whether S is justified in believing p depends solely on S’s internal states.
- There is a function from states of an agent to permissible beliefs, and whether an agent’s beliefs are justified depends solely on the nature of that function, and the agent could in principle be mistaken, and even rationally mistaken, about the nature of the function.
The first bullet point defines a kind of internalism in epistemology. The second bullet point defines a kind of externalism about epistemic norms. But the two bullet points are compatible, as long as the function in question does not vary between agents with the same internal states. The two bullet points may be in some tension, but they are logically compatible. (And arguably in his later works Russell held both views, though he also saw the tension between them.)
And normative externalism is compatible in principle with the view in ethics that there is an internal connection between judging that something is right, and being motivated to do it. This view is sometimes called motivational internalism (Rosati 2014). But again, there is a tension, in this case so great that it is hard to see why one would be a normative externalist and a motivational internalist. The tension is that to hold on to both normative externalism and motivational internalism simultaneously, one has to think that ‘rational’ is not an evaluative term, in the sense relevant for the definition of normative externalism. That is, one has to hold on to the following views.
- It is irrational to believe that one is required to φ, and not be motivated to φ; that’s what motivational internalism says.
- An epistemically good agent will follow their evidence, so if they have misleading moral evidence, they will believe that φ is required, even when it is not. The possibility of misleading moral evidence is a background assumption of the debate between normative internalists and normative externalists. And the normative externalist says that the right response to misleading evidence is to be misled.
- An agent should be evaluated by whether they do, and are motivated to do, what is required of them, not whether they do, or are motivated to do, what they believe is required of them. Again, this is just what normative externalism says.
Those three points are consistent, but they entail that judging someone to be irrational is not, in the relevant sense, to evaluate them. Now that’s not a literally incoherent view. It is a souped-up version of what Niko Kolodny (2005) argues for. (It isn’t Kolodny’s own view; he thinks standards of rationality are evaluative but not normative. I’m discussing the view that they are neither evaluative nor normative.) But it is a little hard to see the attraction of the view. So normative externalism goes more happily with motivational externalism.
And that’s the common pattern I think. Normative externalism is a genuinely novel kind of externalism, in that it is neither entailed by, nor entails, other forms of externalism. But some of the considerations for and against it parallel considerations for and against other forms of externalism. And it sits most comfortably with other forms of externalism. So I think the name is a happy one.