Let me assume for the moment that we are all consequentialists. This means that we all accept that we should do whatever results in the best outcome. But, even if we are in agreement about this, there is still some question as to how we should reason ethically about the situations in which we find ourselves. Should we think just about the consequences of our individual action, or should we think about which rules would guarantee the best outcome, and then follow those rules (a kind of deontology)?
But, since we are working within a consequentialist framework, we have an easy way of determining which method of reasoning is best, namely by seeing which method results in the best outcomes. Thus I have produced the diagrams below, which illustrate two possible situations.
In these diagrams we are to assume that some activity is under consideration, whose outcome (good or bad) depends on how many people engage in it. The far right of each chart represents 100% of the population participating, and the origin represents 0%.
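The two diagrams can be sketched as simple outcome functions of the participation rate. The exact shapes below are assumptions made for illustration; all that matters is the qualitative feature of each: in situation #1 the outcome worsens as participation rises, while in situation #2 it is good only near full participation.

```python
def outcome_1(p):
    """Situation #1: a little participation is beneficial,
    widespread participation is harmful."""
    return 1.0 - 2.0 * p  # positive below 50% participation, negative above

def outcome_2(p):
    """Situation #2: good only if (nearly) everyone participates."""
    return 2.0 * p - 1.0  # positive only above 50% participation

# Sample the curves at a few participation rates (0.0 = origin, 1.0 = far right)
for p in (0.0, 0.25, 0.75, 1.0):
    print(f"p={p:.2f}  situation #1: {outcome_1(p):+.2f}  situation #2: {outcome_2(p):+.2f}")
```

Any curves with these qualitative features would serve equally well for the argument that follows.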
Situation #1 seems a strong argument that developing and following rules, rather than trying to achieve the best outcome directly, produces the best outcome. If everyone reasoned directly in situation #1, each person would conclude that engaging in the activity yields some benefit (the difference between zero and one participants); but because many people would reason in the same fashion, they would end up on the far side of the chart, at an undesirable outcome. Thus we might conclude that we should take a page from Kant, and hold that we should only make a particular choice if the outcome would be good given that everyone made that same choice.
But that can’t be the right solution either, because of situations like #2. If we reasoned in the Kantian style then we would argue for participating in situation #2, because the outcome would be good if everyone participated. However, not everyone is ethical, and certainly not everyone reasons in this way. Thus less than 100% of the population will participate, and the outcome will be a bad one. So we can’t accept the Kantian outlook either, because it too results in outcomes that are less than optimal.
We might then attempt to modify our Kantian scheme, and say that we should consider the value of the outcome if 70% or 90% of the population made the same choice, but these modifications are also doomed to failure. How many people will participate in a given activity depends on which activity we are considering, which ethical systems people subscribe to, etc. Not only does this vary from situation to situation, it changes with time and place as well. Thus we can’t amend our rule scheme to yield the best possible result, even in principle: the calculation we would need to incorporate into our reasoning would be so unwieldy that we would spend all of our time trying to figure out what is ethical, leaving no time to act when action is appropriate.
But perhaps we are mistaken in thinking that we need to resort to rule-based reasoning in the first place. Obviously we expect the person acting to be able to predict, to some degree, the consequences of their actions. Why then have we assumed that they can predict the consequences of their actions but not how many other people will act in situation #1? The consequences of many of our actions depend on the behavior of other people. For example, if I decide to drive through a green light, I can only justify doing so because I expect the cross traffic to stop rather than run the red light. In situations such as #1, then, predicting how many other people will act is just one more factor to take into account when predicting the outcome of the action, and we already expect people to be able to predict the outcomes of their actions with some reliability. Thus the consequentialist in situation #1 should take into account that some people will probably choose to act no matter what (perhaps because they have a poor grasp of the situation), and should therefore refrain from acting in order to avoid the bad result (or to avoid making it worse). In general most people are fairly good at predicting the actions of others, and even if you aren’t, you can always ask. Thus I think the best strategy is to think only about the consequences of our own actions, and that taking other people’s realistic behavior into account (not wishful thinking about what would happen if everyone acted as we did) is simply part of predicting those consequences.
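The strategy argued for here can be sketched as a simple decision procedure: predict how many others will participate, then ask whether adding yourself to that number improves or worsens the outcome. The outcome function and the prediction are hypothetical inputs for illustration, not a model of any particular situation.

```python
def should_participate(outcome, predicted_others, population):
    """Participate only if adding yourself to the predicted
    participants yields a better outcome than abstaining.

    outcome: function mapping a participation rate (0.0 to 1.0)
             to the value of the resulting state of affairs.
    predicted_others: how many others you expect to participate.
    """
    without_me = outcome(predicted_others / population)
    with_me = outcome((predicted_others + 1) / population)
    return with_me > without_me

# Situation #1: outcome worsens as participation rises.  Even though some
# people will act regardless, joining them only makes things worse, so the
# consequentialist abstains -- no appeal to rules required.
decreasing = lambda p: 1.0 - 2.0 * p
print(should_participate(decreasing, 30, 100))  # → False
```

Note that the same procedure handles situation #2 correctly: if the outcome improves with participation, it recommends participating, regardless of whether full participation will actually be reached.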