On Philosophy

May 30, 2007

The Psychic Sadist

Filed under: Ethics — Peter @ 12:00 am

There is something that strikes me as absurd about utilitarianism. Now certainly the goal of utilitarianism, maximizing happiness, seems reasonable. (Although it too can be questioned. Is happiness really what we desire for society? If so, why not put drugs in the water?) However, the idea that we can get a happiness sum by taking the happiness of individuals and adding them up seems less so. First of all, it isn’t obvious that the happiness and suffering of different people are comparable in this way. And, more importantly, utilitarianism seems to ignore important ethical principles, and runs into difficulties by doing so. For example, it would seem to justify theft, so long as the thief benefits more than the person being robbed suffers. But on the other hand it can’t endorse robbery in general, even when the thief benefits more than the victim suffers, because then society would collapse, an outcome that would not maximize happiness. Now there are ways to get around this apparent contradiction (between recommending certain acts of theft while condemning theft in general), some better than others, but the fact that such contradictions pop up in the first place hints that there is something wrong with utilitarianism.
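
To make the summing operation explicit: the classical utilitarian aggregate (the notation here is mine, not the post’s) is

U = u_1 + u_2 + … + u_n,

where u_i is the happiness of individual i, and the theory directs us to act so as to maximize U. The worry above is that no such U is well defined if the u_i are not measurable on a common scale.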

Of course arguing against utilitarianism is hard to do, not because it is a perfect ethical doctrine, but because there are so many variations of it. If you put forth an argument against the two main kinds of utilitarianism (act and rule utilitarianism), some will simply take this as evidence that their personal variant of utilitarianism is the right one, if it avoids that particular pitfall. Of course maybe it is, but maybe it avoids that pitfall only because the objection wasn’t tailored with it in mind. In any case my strategy here will be to argue that any scheme whose only ultimate goal is to maximize happiness, by whatever means (rules, acts, or anything else), is anti-normative in some possible worlds, meaning that in such worlds the vast majority of people have reason to act against its recommendations and no reason to act in accordance with them, and that following its recommendations produces a worse outcome than deviating from them by any non-utilitarian standard. Now we suppose that ethics is normative, and normative in all possible worlds, because if it isn’t normative in all possible worlds then it is possible that it isn’t normative here (and to entertain the idea that ethics is non-normative in some worlds seems absurd by itself). This indicates that even if utilitarianism happens to be normative here it isn’t a complete ethical theory, but rather part of a larger theory that applies to all possible worlds. But if it is part of such a larger theory then we would expect utilitarianism to be at best an incomplete ethical theory by itself, even in this world.

Consider then a world exactly like our world, except that this one contains a psychic sadist who lives in a cave and never comes into direct contact with the rest of the world. The psychic sadist knows how happy everyone is, and his happiness is inversely proportional to the happiness of the rest of the world. Specifically, if the total happiness in the rest of the world goes up by 1 unit his happiness goes down by 1.5 units, and if the total happiness of the rest of the world goes down by 1 unit his happiness goes up by 1.5 units. Now in such a world utilitarianism tells us that we should attempt to minimize the happiness of the world minus the sadist in order to maximize the happiness of the world including the sadist. This means that utilitarianism says that we should, in such a world, start needless wars, torture each other, etc. This seems utterly absurd to me. How can the addition of one psychic in a faraway corner of the globe turn the ethical order of an otherwise normal world utterly upside down?*
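
To spell out the arithmetic (writing H for the total happiness of everyone except the sadist, S for the sadist’s happiness, and c for a constant fixing his baseline; the variables are mine, not part of the original stipulation): the stipulation makes

S = c − 1.5H,  so the utilitarian total is  T = H + S = c − 0.5H.

Since T falls by half a unit for every unit H rises, the sum over the whole world, sadist included, is maximized by driving everyone else’s happiness as low as possible.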

Of course you know what my position on ethics is: I think we should act so as to maximize the wellbeing of society as a whole. Such an ethical theory, while consequentialist like utilitarianism, does not reverse its judgments because of the addition of one psychic sadist. In fact I see this doctrine as a candidate for the more complete ethics that I alluded to earlier, of which utilitarianism is but one part. Generally, when everything is working properly and people are getting along and obeying the rules, maximizing happiness is what is best for society. However, there are cases in which maximizing happiness may be detrimental to society, such as when we take away the happiness of many people in order to make one person happier. Such actions make society less stable, assuming they don’t have some justification other than increasing the total happiness (a justification acceptable to those who are being made less happy), and hence are actually unethical. Here then we have a kind of compromise. We might agree that utilitarian reasoning is a good rule of thumb, and that it makes sense to employ it when reasoning ethically about simple and relatively unimportant matters. However, when we want to give a detailed ethical analysis of some situation, or make absolutely sure that we are doing the right thing, we would appeal to the complete theory.

* And it raises the possibility that someone could justify, at least to themselves, any normally unethical act if they believed in the existence of such a psychic sadist. Since people can believe in a benevolent god it seems possible that some people might believe in a hostile god, who enjoys the suffering of the world. Again, it seems absurd that people who are working with the correct theory could justify an inverted set of ethical judgments just because of the addition of one false belief which has no observable consequences.

5 Comments

  1. Fun example! One worry with these kinds of “utility monster” cases is that it isn’t entirely clear that we can fully conceive of the alleged scenario. At least, I have trouble imagining any one individual’s welfare (or “happiness”) descriptively outweighing the aggregate of everyone else’s. That description seems like the absurd part. The utilitarian’s claim — that a descriptively greater benefit is thereby a normatively more important one — doesn’t seem obviously absurd at all.

    Aside, I’m curious as to what the “wellbeing of society as a whole” is, if not the aggregate wellbeing of the individual members. (It’s something I recently explored a bit, here.) If you’ve discussed this before, could you provide me a link?

    Comment by Richard — May 30, 2007 @ 3:46 am

  2. Sure, in this world it is absurd; that is why we are considering a possible world. Certainly people can differ in how happy they are, so it isn’t impossible. Also, link: https://onphilosophy.wordpress.com/2006/09/16/good-results-and-the-value-of-individual-lives/.

    Comment by Peter — May 30, 2007 @ 8:42 am

  3. Thanks for the link.

    On the first point, just because “people can differ in how happy they are”, it doesn’t follow that there isn’t an upper bound on one’s welfare. (By ‘absurd’, in this context, I really meant ‘inconceivable’.)

    You ask, “How can the addition of one psychic in a faraway corner of the globe turn the ethical order of an otherwise normal world utterly upside down?”

    One plausible answer is that it’s because the addition turns upside down the individual welfare facts of the world, and individual welfare matters for ethics. Again, the aspect of the scenario that my intuitions rebel against is the suggestion that we’ve described a case where war, torture, etc. predictably cause a net increase in human welfare. On the other hand, if we take this stipulation seriously, it no longer seems so crazy to say that such (beneficial) actions are immoral in that world. So I’d be wary of drawing any sweeping conclusions from the thought experiment.

    Comment by Richard — May 30, 2007 @ 4:58 pm

  4. Certainly that doesn’t convince me that it is impossible. For one, I don’t think that impossible = inconceivable. Secondly, it seems conceivable to me.

    Of course if you are willing to embrace the absurd conclusion then there is nothing I can say against you. But then again you can’t have any leverage against competing ethical views either, because supporters of those systems can equally just accept any apparent absurdities you bring up.

    And doesn’t it also seem absurd to you that belief in the existence of such a psychic could give people justification to act immorally, given that the belief has no observable consequences? Of course if two people disagree about whether some chemical is harmful they will have differing ethical beliefs about its use. But doesn’t it seem absurd that a single belief could turn everything on its head? Especially a belief that doesn’t purport to affect the observable outcome of events in the least.

    Comment by Peter — May 30, 2007 @ 5:42 pm

  5. Does it count as a single belief if, say, you think there’s a whole planetful of people out there who suffer momentously whenever you act kindly? It seems like such momentous beliefs, which turn the factual world on its head, could likewise turn the moral world on its head.

    As suggested before, I think it’s purely the factual component that’s absurd. The moral implications seem fine, bearing in mind the peculiar circumstances. Note that I’m not “just accept[ing] any apparent absurdities” — I just don’t share your confidence that the ascribed moral conclusion really is so transparently “absurd” or otherwise inappropriate-seeming for that given situation.

    If I knew that helping someone would thereby impose an even greater harm on someone else, that knowledge would give me pause. And if the most good we can do for the people in the world necessarily involves imposing a lesser cost distributed among others, then that sounds like something worth doing.

    Comment by Richard — May 31, 2007 @ 4:46 am

