If someone wants to treat their intuitions concerning what they should and shouldn’t do as defining what is and isn’t ethical, there is nothing I can say to “prove” them wrong. At best I can say that I, and many others as well, mean something else when we use the word ethics. I can, however, demonstrate that such an approach to ethics is rather pointless. A theory about right and wrong developed in this way essentially amounts to patting oneself on the back; it serves no possible purpose. In other words, such a theory fails to resolve the “Nazi problem” or the “monster problem”: if we happen to be monsters, such a theory will not lead us to be better people. (This problem isn’t restricted to ethical theories either; I would argue that many philosophical methods can’t overcome it, since they only have the potential to endorse what we are doing, not to correct us when we are wrong.)
Some attempt to solve this problem by accepting the idea that if we could all somehow get from our intuitions, which vary from person to person, to the same ethical theory, then the monster problem would be solved. Surely, they reason, if we all arrive at the correct ethical theory this must eliminate the problem, because the monsters arrive at that theory too. Putting aside for the moment whether such a convergence is possible, even if it were the case that we could all get to the same ethical theory, I am not convinced that this would make it the right theory. All it demonstrates is that the intuitions we actually share aren’t too far apart. And that certainly doesn’t make them right, because we can easily imagine a monster world, where everyone has monstrous intuitions. But unless we are to somehow justify our ethical intuitions themselves, this is essentially the only way the monster problem can be solved, and so I will simply pretend that such a solution works for the remainder of this post, and focus on how such a convergence might be reached.
One reason that our intuitive ethical judgments might disagree is our intuitive acceptance of certain principles that extend our ethical judgments to cases where we are less certain. For example, if one person accepts the general principle that killing is always wrong and another rejects that principle, holding instead that killing can be justified in some circumstances, then their ethical judgments will conflict some of the time. And it is not clear how such disagreements might be resolved. Such considerations motivate a retreat from general principles. Instead, perhaps what we should take as the intuitive foundations of ethics are specific cases, that is, our intuitions about whether specific actions in specific situations are right or wrong. Obviously there will be some disagreement about specific cases as well, and so we must accept that our ethical intuitions about specific cases may also be in error (or deny that a convergence is possible). However, this is not quite the same problem we faced with general principles, because our ethical intuitions about specific situations come in different strengths. Thus we might argue that, while not all of these intuitions can be trusted, for each individual there is some set of their strongest intuitions (its size may differ between individuals) whose members are in perfect agreement with the intuitions in the corresponding sets for other people. Obviously, to ensure that this agreement actually exists, the set for each person must be relatively small. And from this small set, we suppose, we should generalize, yielding ethical rules that cover all situations, rules which may overturn some of our ethical judgments, and which everyone is led to construct by this process, since everyone generalizes from sets of intuitions that agree with each other.
If this could actually be done it would be the best possible outcome for doing ethics on the basis of intuition; while it can’t overcome the monster problem in all its forms, it does succeed given the assumption that we aren’t completely monsters. But, unfortunately for doing ethics based on intuition, it doesn’t work. The problem lies in how we generalize from the intuitions in this limited set to more general laws. Obviously there are many ways to generalize from a limited number of examples to more general rules; I am going to consider only two, but the problems facing these two ways of generalizing are shared by all methods of generalization in this context, and illustrate why the method can’t work. First consider the simplest way to generalize: turning specific features of the situation into generalities. For example, we might generalize from the claim that it is wrong for A to kill B in room Y to the claim that it is wrong for A to kill B in any room. But generalizing in this way can’t be endorsed, because it effectively slips in unnoticed moral judgments, specifically about which features of the situation matter. It is equally possible to generalize from that situation to the rule that it is wrong for A to affect B in any way in room Y, by turning some other feature of the situation into a generality. Since people won’t agree about which features are ethically relevant in all cases, this way of generalizing won’t lead to universal agreement. We might attempt to avert this problem by instead constructing general rules on the basis of similarity, treating all features of the situations equally: specifically, claiming that the ethical status of a situation is the same as that of the most similar situation in our limited set of basic ethical claims.
This is an objective way to go about constructing general rules (everyone will construct the same general rules), but it now suffers from making highly unintuitive claims. If our basic ethical intuitions are that it is wrong for A to kill B in room Y and that it is right for C to give a gift to D in room Z, then we will judge C killing D in room Z to be right, because it is much more similar to the second situation than to the first. Of course we could avoid this problem by making our basic intuitions include fewer irrelevant facts, but then we are faced with the problem we went down this path to avoid: not everyone agrees as to which features of the situation are unimportant, and so there will be no general agreement.
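The similarity-based method described above is, in effect, nearest-neighbor classification over the features of situations. A minimal sketch (my own illustration, not from the original text, with hypothetical feature names) makes the counterintuitive verdict concrete: the monstrous case shares more features with the gift case than with the killing case, so it inherits the verdict “right”.

```python
# A toy sketch of "generalization by overall similarity": each situation
# is a set of features, every feature is weighted equally, and a new
# situation inherits the verdict of the most similar basic case.
# The feature encoding here is hypothetical, chosen only for illustration.

def similarity(a, b):
    """Number of shared features -- all features treated equally."""
    return len(a & b)

def judge(situation, basic_cases):
    """Return the verdict attached to the most similar basic case."""
    features, verdict = max(basic_cases,
                            key=lambda case: similarity(situation, case[0]))
    return verdict

# The two basic intuitions from the example in the text:
basic = [
    ({"agent:A", "patient:B", "act:kill", "place:roomY"}, "wrong"),
    ({"agent:C", "patient:D", "act:gift", "place:roomZ"}, "right"),
]

# C killing D in room Z shares three features with the gift case (agent,
# patient, place) but only one with the killing case (the act), so the
# method delivers the unintuitive verdict "right".
print(judge({"agent:C", "patient:D", "act:kill", "place:roomZ"}, basic))
```

Since every feature counts the same, the ethically crucial feature (the act being a killing) is simply outvoted by the irrelevant ones, which is exactly the failure the paragraph above describes.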
And that is the end of constructing ethical theories on the basis of intuitions, unless we are willing to give up on solving the monster problem in any way. Obviously part of the problem with constructing ethical theories on the basis of intuition is that intuitions are highly subjective and variable. But another problem plaguing this approach is that it is essentially a descriptive approach to ethics: it aims at creating a theory that labels situations as right or wrong. This contrasts with a normative approach, which focuses on understanding normativity (what we should and shouldn’t do), and only then, on the basis of that understanding, labels situations as right or wrong. Studying normativity seems the better approach, because we need only settle the facts about normativity, a single area of inquiry, rather than a myriad of different situations, where it isn’t clear where a unified theory would come from, or how we could get to it from essentially unrelated situations.