Is our primary ethical responsibility to always do good and avoid evil as much as possible, or to always avoid evil and do good as much as possible? Within a single ethical system, these two attitudes obviously recommend the same actions. Complications arise, however, when we consider multiple systems of ethics. These considerations can then be seen as meta-ethical, even though they should probably influence how we decide to act. This is because no one is absolutely certain that the ethical theory they subscribe to is correct (or at least they have no rational reason to be completely confident). Let me give an example. Pretend that there are only three possible ethical theories: A, B, and C. A person may think that A is 98% likely to be the correct theory, and give B and C only a 1% chance each of being correct. Even so, if B and C label some action as good or bad, and A is silent about it, does that person have a reason to do or avoid that action, on the chance that it might be right or wrong?
Let us first consider the recommendation to always avoid doing the wrong thing. This might seem like a safe course of action, but to actually live by it would be excessively burdensome. For example, some ethical systems recommend treating animals almost as well as humans, and since we can't be completely sure that these theories are wrong, we would have to avoid eating meat, just in case. It would also mean living by all the prohibitions of different religions (to the greatest extent possible in the case of conflicts). Clearly this is ridiculous. Unfortunately the alternative, always doing what is good, is just as bad. For example, some ethical codes recommend giving all of your money to charity, and if we are trying to always do the good thing we should give away all of our money, just in case. Again, this is ridiculous.
So what are our other alternatives? We might attempt to apply some kind of expected value calculation to our ethical choices, since it seems to work well for practical choices where there are multiple possible outcomes. We might then map the goodness and badness of an action under an ethical theory onto the real numbers, multiply that value by the likelihood of that ethical theory being correct, and then take the sum of these values over all possible ethical codes. The action with the greatest result (its summed goodness value) would be the one we should do. Even if we put aside for the moment the difficulties concerning mapping goodness and badness to the real numbers, this method still has some problems. For example, it would still recommend that we never eat animals, since few ethical theories claim that eating animals is good, and thus the value of eating animals would be less than that of not eating animals. One attempt to fix this might be to argue that self-interest, thrown in as a possible ethical theory, would outweigh the ridiculous results. Unfortunately, this is only a possibility if you judge self-interest as more likely to be correct than most of the other alternatives. Since I certainly don't, I am going to keep looking for a solution. Another alternative might be to simply act in whatever manner the ethical theory you judge most probable recommends. This seems to be how most people make their choices, but unfortunately it is arrogant and irrational, since ultimately it is telling us to ignore the possibility that we could be mistaken, which is generally a bad policy.
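The expected value calculation described above can be made concrete with a small sketch. Everything here is a hypothetical placeholder: the theories, the credences, and the goodness numbers are invented for illustration, not claims about any real ethical theory. The sketch also reproduces the problem noted above: even a 1% credence in a theory that condemns eating meat tips the balance when no theory assigns eating meat a positive value.

```python
# Illustrative sketch of the expected-value approach to moral uncertainty.
# All theories, credences, and goodness values below are hypothetical
# placeholders chosen to mirror the meat-eating example in the text.

# Credence assigned to each ethical theory (should sum to 1).
credences = {"A": 0.98, "B": 0.01, "C": 0.01}

# Goodness of each action under each theory, mapped onto the real numbers.
# 0.0 marks a theory that is silent about the action.
values = {
    "eat_meat": {"A": 0.0, "B": -10.0, "C": 0.0},
    "abstain":  {"A": 0.0, "B": 10.0,  "C": 0.0},
}

def expected_moral_value(action):
    """Sum over all theories of (goodness under theory) * (credence in theory)."""
    return sum(credences[theory] * v for theory, v in values[action].items())

# Even with only 1% credence in theory B, abstaining beats eating meat,
# because no theory assigns eating meat a positive value to offset it.
best = max(values, key=expected_moral_value)
```

Here `best` comes out as `"abstain"`: theory B's small credence is the only nonzero term, so the calculation recommends vegetarianism regardless of how confident we are in theory A, which is exactly the objection raised above.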
Usually this is the point where I jump in with my solution, but unfortunately I don't have one for this problem. Maybe this is because it doesn't seem pressing enough, since in most cases most ethical theories agree. Feel free to propose your own, but remember it can't ignore the principles of rationality (even if you are very confident in your chosen ethical theory you can't simply ignore the possibility of being wrong), it can't advocate ridiculous outcomes (for example, being bound by the recommendations of nearly every ethical theory in the cases where they don't conflict), and it can't be impossible to actually apply (as the expected value calculation would have been, even if it hadn't suffered from other problems).