I think we all agree that causal explanations are good explanations, where they can be given. But it isn’t as clear why they are good explanations. We might be tempted to loosely define a good explanation as one that leads to understanding. But such a definition doesn’t really improve our position. It is just as hard to say what understanding is as to say what an explanation is. And understanding has unique problems of its own, because whether an individual understands something or not seems to be something only they can say; in other words, understanding is subjective. But we also know that people can be fooled into thinking that they understand something when they actually do not (they say they understand, but are then unable to apply what they supposedly understand, demonstrating a lack of understanding). Thus to investigate understanding seems likely to lead us only to a position on what makes us think that we understand things, rather than on what gives us real understanding.
So let us go back to what an explanation is. I tentatively propose that a successful explanation is one that gives us information (or knowledge) about similar situations. Of course this is still a bit of a vague definition, so let me explicate it with an example. Suppose someone is learning addition for the first time and is confronted with the problem 2 + 2 = ?. Now let us further suppose that someone tells them that the answer is 4. Have they explained what 2 + 2 is? As I see it they have not, as indicated by the fact that the person who is told that the answer is 4 can’t generalize to other arithmetical problems, and hence, by the definition of explanation, has not had the addition of two and two explained to them. Now let us suppose that instead of being told that the answer was 4 they had some algorithm described and demonstrated for them, which yields 4 upon being given 2 + 2. There are two cases here. One possibility is that the algorithm that has been demonstrated works for the case of 2 + 2, but does not always give the correct answer in general. In this case there is a sense in which addition has been explained to them, since they now have information about the results of other problems. But this information turns out to be false, so the explanation is a bad one. The other possibility is of course that the algorithm works for all cases, in which case addition has been explained to them and the explanation is a good one.
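The contrast between the two kinds of algorithm can be made concrete with a small sketch. The function names and the choice of “multiply the inputs” as the faulty rule are my own illustration, not part of the argument itself; any rule that happens to agree with addition at 2 + 2 but diverges elsewhere would do.

```python
def bad_add(a, b):
    # A faulty rule: multiply the inputs. It happens to agree with
    # addition at 2 + 2 (since 2 * 2 = 4), but only by coincidence.
    return a * b

def good_add(a, b):
    # A rule that captures addition itself: count up from a, b times.
    # (Valid for non-negative integer b.)
    result = a
    for _ in range(b):
        result += 1
    return result

print(bad_add(2, 2))   # 4  -- coincidentally correct
print(bad_add(2, 3))   # 6  -- wrong; the "explanation" fails to generalize
print(good_add(2, 3))  # 5  -- correct in general
```

Both rules yield 4 for 2 + 2, so both count as explanations in the sense proposed above: each gives the learner information about other problems. But the information the first rule gives is false, making it a bad explanation, while the second generalizes correctly.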
So what is a causal explanation? There are two prominent views as to what saying that two things are causally related entails. One is that to say that A is the cause of B is to say that a certain relation holds between the two events, just as we might point at two objects and assert that the relation of being the same color holds between them. Such a view is of course problematic. For starters, it leaves utterly mysterious what basis we have to claim that the causal relation holds between two events. Generally when we say that a relation holds between two objects it is on the basis of various properties that each has. But the fact that two events stand in a causal relation seems to depend on something other than their properties. For example, consider an event A that we say is the cause of event B. And furthermore suppose that there is some event C, which is exactly like event B except that it occurs at a much later time. If A is the cause of B why is it not also the cause of C, as B and C are otherwise identical? Of course we can appeal to the fact that C is at a later time, indeed that is the “right” answer, but there is no clear answer why time should matter in this case. Often we say that one event may be the cause of another in the distant future, so it seems wrong to say that C’s being further away in time from A than B is something that by itself can disqualify A from being the cause of C. Now we might try to remedy this by saying that A is the cause of B and not of C because a chain of causal relations connects A to B while no such chain connects A to C. Again this is in a sense “right”, but now we are abandoning the idea that causation is just a relation between two events, and asserting that to claim that two events are causally related is also to claim that there exists some suitable causal chain connecting them. This is of course an improvement, but it still leaves the question of why one thing can be said to be the cause of another unanswered.
You see, the problem is that we often make causal claims without knowing anything about the existence of the right kind of causal chain, which must exist at the level of microscopic interactions. We made causal claims long before we had any idea of the existence of a subatomic world, and we continue to make them without giving the subatomic world any thought. Thus to accept this definition of causation would be to hold that most of our causal claims are just idle speculation. And furthermore, to get back on topic, it would also strip them of any explanatory power. Knowing that A is the cause of B would be like knowing that 2 + 2 = 4: it might tell you something about that particular case, but it wouldn’t tell you anything in general. So we would also be moved to reject the idea that causal explanations have explanatory power, again a move that seems to imply that this understanding of causation has diverged greatly from the general use of the term.
Let us thus turn to the other popular understanding of what two events being causally related entails. Specifically, we understand the claim that A is causally related to B as asserting that events progress following certain regular patterns, and that the regularities involved in this case are such that without A, or with certain changes in A, B would have turned out differently. Obviously this suffers from fewer epistemological problems than the previous understanding of causation did. There isn’t any substantial problem with deducing the existence of regularities in the progression of events, nor with applying this knowledge to guess how changes in certain initial conditions would change the outcome. It might, however, not seem to have much explanatory power either. After all, the claim that the two events are causally related has only told us about this one particular situation. However, there are reasons to believe that causal explanations of this sort have real explanatory power. The first, and least important, is that situations exactly like the one about which the causal claim is made could occur again, and thus the explanation applies to them as well. Secondly, there is the implicit assumption that the regularities involved are fairly robust, such that we have valid reason to expect the causal explanation to be valid in similar cases as well. And thirdly, the causal explanation may be more detailed than simply “A causes B”; it may include claims as to what features of A caused B, what background conditions were involved, which regularities led to the expectation that changes in A would produce a change in B, and so on. Every added detail allows us to better determine which features of this situation were responsible for A causing B, and thus to generalize from it to other situations sharing those features.
But we already knew that causal explanations must have some value. After all, they wouldn’t be so popular if they weren’t useful. Really, considering causation is more of a test of whether this position on explanation holds up well: it rejects problematic understandings of causation by showing that they would make causal claims bad explanations, and endorses the less problematic understanding as giving causal claims genuine explanatory value. Now I was going to turn this theory back on philosophy itself, to see whether philosophy gives good explanations, but I’ll leave that for another day.