[Response]
I’m a utilitarian, so in the strictest sense, yes: I believe the effect on the person acted upon (and on all other human beings affected) is the relevant final tiebreaker for morality.
In practice, however, humans operate on the basis of abstract principles and character, and we recognize that people sometimes get things wrong. So I’d distinguish between a case where you know action X hurts person Y and do it anyway, and a case where you believe action X helps person Y but turn out to be wrong. The moral fallout of the specific action is the same in both cases: person Y gets hurt. But the broad term ‘morality’ also includes a judgment of you as an individual and the appropriate responses to you.
In a more mathematical sense, while each action has identical realized utility, our judgments of your expected future utility differ, because people usually succeed at their immediate goals to help or hurt others. Someone who intended harm is likely to cause harm again; someone who intended to help and failed is likely to produce good outcomes next time. So even though the utilities of these two actions are the same, our appropriate responses are not, because our responses are intended to influence future actions.
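To put that expected-utility point in symbols (a toy sketch of my own; the specific numbers and the conditioning on intent are illustrative assumptions, not part of the original argument): the realized utilities of the two acts are equal,

$U(\text{harm intended, succeeds}) = U(\text{help intended, misfires}) = -1,$

but the expectations over the agent's future actions diverge, since people usually achieve what they set out to do:

$\mathbb{E}\left[U_{\text{future}} \mid \text{intends harm}\right] \approx -1 \;<\; \mathbb{E}\left[U_{\text{future}} \mid \text{intends help}\right] \approx +1.$

Blame, praise, and correction track the second comparison, not the first.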
The problem with focusing only on will is that it removes any moral imperative to be correct about the material effects of one’s actions. I think there absolutely is such an imperative, even if we make allowances for human error in practice. In fact, I think it is important enough that it forms part of the three-pronged notion of morality at center stage in my Medium bio.
