Ethically there seems to be a difference between action and inaction. If I steal something then I am to blame, ethically, because of my action. But if someone else steals something I am not to blame, because I am not the one who acted (more specifically, my actions did not cause the theft to be committed). Thus it seems like only a person’s actions should matter for ethical considerations, not the things that they didn’t do. However, there are cases that invalidate this principle, or at least seem to. For example, if someone is about to fall into a well, and I can catch them without risking myself, then if I fail to act and allow them to fall into the well, it would appear, at least in most people’s opinion, that I am ethically to blame, even though my actions in no way caused their fall.
Thus we are motivated to improve our principle, perhaps this time holding that a person is ethically responsible both for the results of their actions and for the things they could have prevented but didn’t. But now consider the same situation, with a person falling into the well, except that this time I have my back turned and don’t see them fall. Even in this situation I could have saved them, had I not been looking the other way, but I think that few would consider me ethically at fault.
And so we might try to improve things by modifying our principle yet again, to hold that we are responsible for all the bad events that we knew would occur but did nothing to prevent (this covers both our actions and the situations in which we fail to act). But this version has two problems. One is that it doesn’t allow for situations in which we can’t prevent the bad event from happening; for example, I may want to prevent the person from falling into the well, but perhaps I am too far away. This is easily rectified by limiting our principle to consider only events that we are able to prevent. A more serious problem is that it doesn’t hold someone who is willfully ignorant responsible. For example, if you are making a product and that product is harmful to people (unbeknownst to you), it seems that you are still ethically responsible for that harm if you had the opportunity to determine for sure whether your product was safe or harmful, but chose to pass that opportunity up because you didn’t want to know (perhaps because you feared that it might be harmful). Cases of negligence probably serve equally well as counterexamples, although it is a bit harder to phrase them in terms of knowledge.
And this time the principle isn’t so easy to improve, because you can’t insist that everyone know everything that will happen, or take every opportunity to improve their knowledge, since then nothing would get done. We would have to devote all our resources to knowing more, and none to acting, for fear that our actions might cause some ill effect that we could have known about, but didn’t.
Let me refer back to the ethical spectrum. So far we have been drawing upon both the right side and the left side of the spectrum. When we were initially concerned only with our actions and their effects, we were on the right side. And when we started to consider knowledge as determining what was right and what was wrong, we were on the left side. But we ignored the exact center, choices, and I think that this is where the solution to our problem lies.
Instead of trying to judge only the results of our actions, which ignores inaction, or what we know, which leads to impossible ethical demands, we can simply judge people based on the choices they made, which avoids all of the problems mentioned so far. It avoids the problem of ignoring inaction because, when you see something happening, doing nothing is a choice. If someone starts to fall towards a well you have the option to try and catch them and the option to do nothing, and you pick one of them. It also avoids the problem of ignorance (being responsible for something you could have prevented but knew nothing about), because in such a situation you aren’t presented with a choice between catching them and doing nothing; since you aren’t aware of what is happening, there is no choice for you to make. And it does not blame people for things that they are unable to prevent: they can choose to try and save the person (if they think there is a possibility of success) and still fail, and be blameless because they made the right choice; or they may realize that it is impossible to save them, in which case they won’t be making a choice between saving them or not (since we only choose between alternatives we think are possible; only someone who is mentally ill chooses to jump to the moon). And finally it can handle the problem of willful ignorance, because such people made a choice not to have certain knowledge, and if that choice leads to something bad happening (because they could have known about what was happening and prevented it, but didn’t) then they are responsible*.
The elegance of this solution (and the fact that it works, whereas attempting to improve the version of the principle that made its judgments based on what people know is hard if not impossible) is one more reason, in my opinion, to prefer ethical theories that sit at the exact middle of the spectrum.
* Actually, the handling of cases of willful ignorance requires a little more detail than this. Whether someone is ethically to blame in such a case depends specifically on the nature of the choice they made when they chose to remain ignorant. If they thought that their product was safe, and had no reason to suspect that it was harmful (building on the example previously given), then they are not to blame ethically, since neither option they were presented with seemed to them to have any ethical implications; specifically, they didn’t think that learning more might reveal information that would give them an ethical reason to change their behavior. In such a situation the fact that the product was harmful was a sad accident, but not that person’s fault. But if someone suspects that their product may be harmful, even if that suspicion is small, then they are ethically to blame for not choosing to learn more, because the choice they made was between ignorance and possibly preventing some harm.