There is something that strikes me as absurd about utilitarianism. Now certainly the goal of utilitarianism, maximizing happiness, seems reasonable. (Although even it can be questioned: is happiness really what we desire for society? If so, why not put drugs in the water?) However, the idea that we can arrive at a happiness sum by taking the happiness of individuals and adding it up seems less so. First of all, it isn’t obvious that the happiness and suffering of different people are comparable in this way. More importantly, it seems to ignore important ethical principles, and runs into difficulties by doing so. For example, it would seem to justify theft whenever the thief benefits more than the person being robbed suffers. On the other hand, it can’t endorse robbery in general, even in such cases, because then society would collapse, an outcome that would not maximize happiness. Now there are ways to get around this apparent contradiction (between recommending certain acts of theft while condemning theft in general), some better than others, but the fact that such contradictions arise in the first place hints that there is something wrong with utilitarianism.
Of course arguing against utilitarianism is hard to do, not because it is a perfect ethical doctrine, but because there are so many variations of it. If you put forth an argument against the two main kinds (act and rule utilitarianism), some will simply take this as evidence that their personal variant of utilitarianism is the right one, if it avoids that particular pitfall. Maybe it is, but maybe it avoids that pitfall only because the objection wasn’t tailored with it in mind. In any case my strategy here will be to argue that any scheme whose only ultimate goal is to maximize happiness, by whatever means (rules, acts, or whatever), is anti-normative in some possible worlds: in such worlds the vast majority of people have reason to act against the recommendations of utilitarianism, have no reason to act in accordance with them, and the outcome of following them is worse, by any non-utilitarian standard, than deviating from them. Now we suppose that ethics is normative, and normative in all possible worlds, because if it isn’t normative in all possible worlds then it is possible it isn’t normative here (and to entertain the idea that ethics is non-normative in some worlds seems absurd by itself). This indicates that even if utilitarianism happens to be normative here, it isn’t a complete ethical theory, but rather part of a larger theory that applies to all possible worlds. And if it is part of such a larger theory, then we would expect utilitarianism to be at best an incomplete ethical theory by itself, even in this world.
Consider then a world exactly like our world, except that this one contains a psychic sadist who lives in a cave and never comes into direct contact with the rest of the world. The psychic sadist knows how happy everyone is, and his happiness varies inversely with the happiness of the rest of the world. Specifically, if the total happiness in the rest of the world goes up by 1 unit his happiness goes down by 1.5 units, and if the total happiness of the rest of the world goes down by 1 unit his happiness goes up by 1.5 units. In such a world utilitarianism tells us that we should minimize the happiness of everyone except the sadist in order to maximize the total happiness of the world including him, since each unit of happiness the rest of the world loses yields a net gain of 0.5 units overall. This means that utilitarianism says that we should, in such a world, start needless wars, torture each other, etc. This seems utterly absurd to me. How can the addition of one psychic in a far away corner of the globe turn the ethical order of an otherwise normal world utterly upside down?*
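The arithmetic behind the sadist thought experiment can be made explicit with a brief sketch (the unit sizes and the 1.5 ratio are simply the ones stipulated above; the function name is my own):

```python
def net_change(delta):
    """Change in total world happiness when the happiness of
    everyone except the sadist changes by `delta` units.
    By stipulation, the sadist's happiness moves by -1.5 * delta."""
    sadist_change = -1.5 * delta
    return delta + sadist_change

# Making everyone else happier lowers the total:
print(net_change(+1))  # -0.5
# Making everyone else more miserable raises it:
print(net_change(-1))  # +0.5
```

So on a strict happiness-sum calculation, every unit of misery inflicted on the rest of the world is a net improvement, which is exactly the inversion the argument turns on.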
Of course you know my position on ethics: I think we should act so as to maximize the wellbeing of society as a whole. Such an ethical theory, while consequentialist like utilitarianism, does not reverse its judgments because of the addition of one psychic sadist. In fact I see this doctrine as a candidate for the more complete ethics that I alluded to earlier, of which I said utilitarianism was but one part. Generally, when everything is working properly, and people are getting along and obeying the rules, maximizing happiness is what is best for society. However, there are cases in which maximizing happiness may be detrimental to society, such as when we take away the happiness of many people in order to make one person happier. Such actions make society less stable, assuming they have no justification other than increasing the total happiness (a justification acceptable to those who are being made less happy), and hence are actually unethical. Here then we have a kind of compromise. We might agree that utilitarian reasoning is a good rule of thumb, and that it makes sense to employ it when reasoning ethically about simple and relatively unimportant matters. But when we want to give a detailed ethical analysis of some situation, or to make absolutely sure that we are doing the right thing, we would appeal to the complete theory.
* And it raises the possibility that someone could justify, at least to themselves, any normally unethical act, so long as they believed in the existence of such a psychic sadist. Since people can believe in a benevolent god, it seems possible that some might believe in a hostile god who enjoys the suffering of the world. Again, it seems absurd that people working with the correct ethical theory could justify an inverted set of ethical judgments merely because of the addition of one false belief that has no observable consequences.