On Philosophy

June 30, 2007

Kylie’s Puzzle, Externalism, and Consciousness

Filed under: Mind — Peter @ 12:00 am

Kylie’s puzzle can be considered a version of the McKinsey reductio of externalism (McKinsey, M. 1991: Anti-Individualism and Privileged Access. Analysis, 51, pp. 9–16). However, I find it preferable philosophically to the McKinsey argument, at least in terms of simplicity, as it sidesteps the issues of metaphysical dependence and concept agnosticism which tend to get dragged into any discussion of the McKinsey argument. Anyway, Kylie’s puzzle is as follows: Kylie entertains the thought that water is wet. Kylie also has reason to believe that content externalism is correct. Therefore Kylie concludes that she can only be thinking about water if she has lived in an environment in which water exists, and in which there is a community of language speakers that use “water” to designate water (plus whatever other twists your favorite variety of externalism may involve). Thus Kylie concludes that water must exist (and that she lives in an appropriate community, and so on). So Kylie has reasoned, solely on the basis of her first person experience of thinking that water is wet, to the conclusion that the world contains water. But this is absurd: clearly Kylie can’t have reason to conclude that the world contains water on the basis of such a priori reasoning (consider various brain-in-a-vat type situations). Thus either externalism is false or we don’t have privileged first person access to our thoughts. And that is a dilemma, since content externalism seems well founded, but so does privileged first person access to our thoughts, as to deny that would be essentially to deny that consciousness exists (or at least consciousness as we know it).

The goal then is to find a way out that allows us to have both content externalism and privileged first person access to the contents of consciousness. But before I discuss my proposal let me first outline another possible way to resolve the puzzle, proposed by Martin Davies, and why I don’t think it completely resolves the dilemma (Davies, M. 2000: Externalism and Armchair Knowledge. In P. Boghossian and C. Peacocke (eds), New Essays on the A Priori. Oxford: Oxford University Press, 384–414). Davies’ proposal is that what we are dealing with in Kylie’s puzzle is a kind of begging the question. He compares it to Moore’s argument that the existence of my hands proves that the external world exists, saying that what both arguments have in common is that the first premise requires as its warrant the very thing that the argument seeks to prove. In Kylie’s puzzle, if externalism is true, then the existence of water is a precondition for the ability to think that water is wet. Thus the argument fails to justify concluding that water exists on the basis of a priori reasoning alone. Davies takes this as a resolution of the problems posed by the argument.

I would not dispute the validity of Davies’ analysis, but I would dispute that it really solves the problem at hand. The problem at hand is that the existence of such puzzles as Kylie’s seems to imply that externalism is incompatible with privileged first person access. And Davies’ solution to the problem is to postulate that thinking that water is wet is only possible if water exists. Which is all well and good, but we don’t have privileged first person access to whether water exists. Which means in turn that we don’t have privileged first person access to whether we are thinking that water is wet. So Davies’ solution, while technically flawless, is really just a way of biting the bullet in this case.

I would rather look for a way out that doesn’t involve biting any bullets, but to do that I need to take a more detailed look at Kylie’s puzzle. Below is Kylie’s puzzle presented with a new convention: surrounding a word with brackets to refer to its content or reference. For example, [water] designates H2O, the natural kind that we call “water” in this world. (Of course on Twin Earth [water] would designate XYZ, but given that Twin Earth situations aren’t involved here we don’t need to take that into account.) The argument is as follows, then:
1. I think “[water] is wet”
2. I can only think “[water] is wet” if [water] exists
3. [water] exists
By describing the argument this way a possible problem becomes apparent, namely putting the content of “water”, [water], into the thought. Obviously, since we accept content externalism, what is thought, what our thoughts refer to, their content, depends at least in part on the external world. But obviously what our thoughts refer to isn’t apparent to us as we think them; otherwise we could know that water is H2O simply by reflecting on it. Let us designate our conception of water by {water}. The premise of externalism then is that {water} by itself doesn’t determine its content, and I am not trying to object to that, or slyly subvert it. In fact we should probably admit that, given a language speaking community, there are a number of facts governing which conceptions correspond to which content, and that these facts may vary from community to community. So in our community the fact that ({water}, [water]) is the case (that {water} means [water]) does not preclude the possibility of ({water}, [XYZ]) for some other community in some other place. Note that, for the sake of clarity, within [] and {} I always use the words we use to designate things in our language.

Now we can rewrite the argument using this notation:
1. I think “{water} is wet”
2. I can only think “{water} is wet” if [water] exists
3. [water] exists
Note that in this formulation the argument is obviously invalid, or at least missing something. To make it valid we would have to add an additional premise, 1.5, that ({water}, [water]) is the case. But that is clearly a fact that is not available immediately to us, at least not with any confidence. In fact we might very well argue that we could never have such a thought, since that would require an ability to think about the content of terms directly, which seems impossible. (As far as I know it is only possible to think of things by conceiving of them as having certain properties or as standing in certain relations to us, but this is not a line of thought I will pursue any further at the moment.)
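The gap that premise 1.5 fills can be made vivid with a toy model (just a rough sketch; all the names and data here are hypothetical illustrations, not part of the argument itself). The idea is that a conception like {water} only acquires a content like [water] relative to a community’s assignment of contents to conceptions, so nothing derivable from the conception alone settles what, if anything, it designates:

```python
# Toy model: conceptions ({water}) get contents ([water]) only relative
# to a community's assignment, mirroring the ({water}, [water]) pairs above.

# Each community pairs conceptions with contents. These assignments are
# external facts, not readable off the conception itself.
EARTH = {"water": "H2O"}
TWIN_EARTH = {"water": "XYZ"}

def content_of(conception, community):
    """The content a conception has is fixed by the community's assignment."""
    return community.get(conception)

# Premise 1: Kylie thinks "{water} is wet" -- she has the conception...
kylies_thought = "water"

# ...but premise 3 ("[water] exists") needs premise 1.5, the assignment,
# which varies with facts outside Kylie's first person perspective:
print(content_of(kylies_thought, EARTH))       # H2O
print(content_of(kylies_thought, TWIN_EARTH))  # XYZ
```

The point of the sketch is just that `content_of` takes the community as an argument: without that extra, externally fixed input, the step from premise 1 to premise 3 cannot go through.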

This solution allows us to accept both content externalism and the ability to have privileged first person access to our thoughts. However, we do have to make a few concessions. The first is the acceptance of what could be construed as a kind of narrow content, in this case {water}. The second is that we have to accept that the content of thought, so long as that content depends on external factors, is not part of consciousness. It is {water}, not [water] that is part of consciousness, and {water} doesn’t depend on anything external (it’s not even a thing, more like a mental representation). But I don’t think these sacrifices are too big given what we have to gain by making them.

Know of a great paper on the McKinsey-Brown argument? Leave me a comment.

June 29, 2007

Use Without Meaning

Filed under: Language — Peter @ 12:00 am

There are some interesting cases which demonstrate that it is fully possible for a person to use a word competently in most situations but still fail to grasp its meaning, or at least all of its meaning. The example I have in mind specifically here is Putnam’s elm/beech example. Putnam points out that we may be quite competent in using the words “elm” and “beech” in many contexts, and yet be totally unaware of the features that define them as types of trees. Thus we don’t really understand what the words “elm” and “beech” mean, despite our competence in using them in many situations. (Note: What I have to say here regarding this case is not meant to be a response to Putnam. It could be a response to Putnam, but there are far better responses available. The better response is simply to point out that his opponents don’t claim that the representation completely determines what is represented, and, more importantly, that it is the representation and not what is represented that is part of consciousness.)

The first thing to notice is that the elm/beech case is in a way compatible with use determining meaning. Obviously someone who doesn’t know the defining features of elms and beeches will be unable to pick them out from other trees when they come across them. Thus a complete understanding of the meaning of a word could be identified with the ability to use that word correctly in all circumstances. But this still leaves us with a problem: in most circumstances understanding elms as just “a kind of tree” is sufficient to use the word correctly. Thus it would seem that by knowing that definition we must have grasped most of the meaning of the word, since we are able to use it correctly in most situations. Yet it seems pretty obvious that we have only grasped a part of the meaning of the word, certainly not most of it. We can justify that claim by noting what the purpose of the word is. The purpose of the word “elm” is to distinguish one kind of tree from others. Thus the meaning of the word “elm” is tied most closely to the parts of it that allow it to fulfill its purpose, namely the distinguishing physical features of the elm. And so by not capturing those features someone who understands “elm” only as “a kind of tree” has not grasped most of the meaning (or at least not the core meaning) of the word.

This naturally leads to a more pressing problem: how do we determine what a word means in the context of a community that shares a common language? Normally we simply appeal to the average or majority understanding of the word. “Dog” means dog, we say, because most people in that community have associated with that word in their minds a representation that represents dogs. But, as cases such as “elm” show, it may very well be the case that the majority of the community does not in fact grasp what we are accustomed to thinking of as the meaning of the word. There are basically two ways to deal with this situation. One is to concede that the meaning of “elm” just is “a kind of tree” within that community, and that it only acquires its useful meaning, as picking out a particular kind of tree, within a sub-community that grasps the fuller meaning of “elm”. Although technically flawless I think this is not a very useful move to make. First of all it is hard to identify the sub-community, except by their understanding of the meaning of “elm”. And secondly the sub-community and the larger community have no problems talking with each other, even about elms (most of the time). So it seems unnatural to pick this sub-community out as distinct when they seem anything but. (In contrast to a sub-community that understands certain technical terms, which it is unable to use in communication with the larger community.) I propose then that we instead re-define how we determine what a word means in the context of a community, from “the average meaning” to “the most common meaning that most would accept as a suitable meaning”. This definition completely avoids the problem because even if most people understand “elm” only as meaning “a kind of tree” they don’t accept “a kind of tree” as an acceptable meaning for “elm”. By understanding an elm as a kind of tree they understand it as being distinct from other trees in certain ways, because that is what being a kind of tree entails.
Thus by understanding that it is a kind of tree it becomes impossible to accept simply “a kind of tree” as a meaning for “elm”, because that phrase doesn’t say what kind of tree the elm is.
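The proposed revision can be sketched as a toy procedure (all names and data here are hypothetical illustrations, including the botanical gloss, which is merely stand-in text): among the meanings that speakers associate with a word, take the most common one that the community would also accept as a suitable meaning:

```python
from collections import Counter

def community_meaning(associations, accepts):
    """The most common associated meaning that also counts as suitable.

    associations: list of the meaning each speaker associates with the word.
    accepts: crude stand-in for the community's judgment of whether a
             candidate meaning is acceptable as *the* meaning of the word.
    """
    for meaning, _count in Counter(associations).most_common():
        if accepts(meaning):
            return meaning
    return None

# Most speakers associate only "a kind of tree" with "elm", but even they
# would not accept that as a suitable meaning, since it fails to say which
# kind of tree. The fuller meaning, though rarer, is accepted.
associations = ["a kind of tree"] * 8 + ["tree with doubly serrated leaves"] * 2
accepts = lambda meaning: meaning != "a kind of tree"

print(community_meaning(associations, accepts))
# → tree with doubly serrated leaves
```

On the old “average meaning” rule the majority’s thin gloss would win; on the revised rule it is filtered out, which is the whole point of the proposal.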

Finally we come to the question of how complicated, exactly, is the partial meaning that most people associate with “elm” and “beech”. In his original example Putnam implies that it must be fairly complex, because people would deny that a beech is an elm. This implies that for “beech” most people also associate with it “not an elm”, “not a fig tree”, and so on. But I do not actually think this is the case. You see, most people would probably also deny that a “beech” is a “Fagus grandifolia”. This is not because they have some understanding of the terms that motivates them to reject the identification, because “Fagus grandifolia” is the scientific name of the beech. Rather it is because people have some intuitive rules about how meaning works, and one of those is that different words usually don’t mean the same thing. Thus they deny that elms are beeches not because the meaning they associate with the terms distinguishes them, but because they have never heard the two kinds of tree explicitly identified, and thus the default assumption that they are different stands. This kind of process is also at work when the meaning of a word is best captured by “the kind of …” or “the thing that …”. In both cases people have assumptions about what kinds of things are (defined by being fundamentally similar in some way) and what things are (have a certain unity) that govern how the word is used without such rules ever making it into the meaning itself. The meaning then remains relatively simple, but some of the concepts involved with it may be complex.

June 28, 2007

Theories About Consciousness

Filed under: Mind — Peter @ 12:00 am

Recently I have been evaluating philosophical theories based on how well they explain, which in turn is determined by whether they can produce acceptable answers to all questions that might legitimately be posed to them. By these standards I judge Armstrong’s representational theory of consciousness to be a good theory. So are a number of higher order theories of consciousness. But although I think that they are good theories I would not agree that they are in fact theories of consciousness. This of course raises the more general question: how can we tell when a theory captures what it intends to? Of course this is a question we don’t usually have cause to ask. Normally theories attempt to capture something that is available, at least in part, to normal perception. For example, if we had a theory about where trees grow and we went out and found that the theory correctly described where bushes grew it would be pretty obvious that the theory was actually about bushes and not trees at all. (Of course we would never actually come across such a theory, since scientific theories are motivated by observation, and so don’t have the same possibility of wandering away from their intended subject matter that philosophical theories do.)

To resolve this problem I appeal to an older idea, and propose that the things we wish to capture with our philosophical theories, such as consciousness, already have ostensive definitions, and that a theory thus only succeeds in capturing such a thing if the theoretical (categorical) definition roughly coincides with the ostensive definition. The ostensive definition defines what we are seeking to describe as a certain kind of thing, picked out in terms of certain properties that we can be immediately acquainted with. For example, the ostensive definition of a particular species of tree might be “the kind of tree with leaf shape X, bark type Y, and general shape Z”. And of course our scientific theory may very well define that same species in terms of its DNA. But the two coincide, despite their vastly different definitions, because the scientific theory says that organisms with that kind of DNA will usually become trees with basically the same kind of leaves, bark, and shape as the ostensive definition says that they have. Thus the theory about that species of tree in terms of its DNA properly captures that species of tree defined ostensively.

So to settle whether a philosophical theory is properly a theory about consciousness it is necessary to have an ostensive definition of consciousness. But that is a bit trickier than you might suppose. We can’t just appeal to our experience of what consciousness is like, because experience itself is very likely to be part of the theory. Thus such a definition wouldn’t really be ostensive at all, but would rather predetermine, to some extent, what form the theory must take (for example, it wouldn’t allow the theory to claim that we were in error about experience). Thus we must take a step further back, and ask ourselves what other properties consciousness has. The key feature of consciousness, outside of facts about the nature of our experience, is that all normal adult humans are supposed to have it. Which means that consciousness is the thing that is common to all people minus whatever varies between them. All that remains to get an ostensive definition of consciousness then is to define what varies from person to person. And that is pretty easy; the quick list is: personality, memories, intelligence, capacity to learn, and emotional dispositions. And this gives us an easy test to determine when we have a theory that properly captures consciousness, or at least when we are close to one. We can take what the theory calls consciousness, in abstract, and then add in all our individuating mental capacities. If what results could very well be us (if we, who are the authority on being us, couldn’t tell the difference between our current experience of consciousness and this new one so defined) then we have a theory of consciousness.

Let’s apply this ostensive definition then and see why these theories fail to capture consciousness, and thus, by extension, what kind of theory could capture it. First of all the ability to represent the external world by itself clearly doesn’t constitute consciousness, because it doesn’t explain certain facts about experience (namely that our experiences themselves are, from moment to moment, available to us). Higher order theories account for this somewhat (although to give rise to experiences as we experience them they would have to be slightly refined to say that the systems contain representations of previous internal states as a result of direct input/information from past states, and not, say, by deduction). But these theories fail to guarantee that the experience of such systems will be experience from a point of view. And assuming the theory could account for the existence of a point of view it would then need to say why the system would think of its own internal states as having qualia. But, given a theory that could accommodate all that, I think that we wouldn’t be able to tell the difference between being as we are now and being such a system plus our individuating mental characteristics. Of course there is the possibility that there is some characteristic of our mental lives that is missed by this account, but that is a possibility I accept by virtue of my current approach to philosophy, namely that we can never rule out the possibility of someone asking “what about fact X?”.

June 27, 2007

How Dualism Fails To Explain The Mind

Filed under: Mind — Peter @ 12:00 am

This post can be considered an extended application of the theory I described yesterday, which proposed a way to evaluate and compare philosophical theories based on their content alone. Here then I will apply that method to dualism as a theory about the mind, both as an indirect argument in favor of that method of evaluation (because it rejects dualism, which we are already motivated to reject for other reasons), and as a way of further detailing how exactly theories should be evaluated by providing an example of just such an evaluation.*

Let’s start with one of the ways in which dualism is claimed to be superior to materialism. Dualism, it is often argued, is to be preferred over materialism because materialism can’t explain various mental phenomena, usually intentionality and qualia. Now under this evaluative scheme this is in fact a valid complaint. If a theory about the mind can’t answer “why does X feel like it does?” then it has left some questions unanswered. Now let’s set aside whether materialism really can’t answer such questions, because to get into that would involve trying to answer them, and that would take all day. Instead it suffices to point out that dualism can’t answer them either. Let’s say that a specific input feeling like red is explained by appeal to some non-physical mental stuff that makes it feel like red. But why then does that mental stuff make it feel like red (rather than, say, green)? That is a question that dualism is equally unable to answer. And in fact the answer it does give may fail the content criterion (it could equally well explain why that input feels like green or blue). So dualism leaves just as many questions unanswered as materialism does, and so on these topics it is no better than even the most simplistic materialism as an explanation.

Dualism, in most of its forms, also leaves unanswered the question “who has a mind?” which materialism does answer. Materialism automatically answers this question because it identifies the mind with some physical stuff or operation of the physical stuff. Thus who has a mind is a question that can be answered by studying the physical properties of things to determine if they have the right stuff or operate in the right way. Now a dualist theory could postulate some additional law governing which physical systems mental stuff accompanies. That does answer this particular question, but leaves open another question, which I will raise below.

To actually explain the mind, assuming that we have a dualist theory which goes beyond a vague appeal to mental stuff, the dualist theory must posit theoretical entities that interact so as to explain the mind, intentionality, experience, and so on. Let’s consider a very simple dualist theory that posits that the mental stuff is divided into two parts, which can be in a number of different states, and which change state based on their interactions. These two parts and their interactions then supposedly explain the mind. We could say: because part A was in such and such a state, and caused part B to be in some other state, we experienced X. This is a pattern that must in fact be followed by every explanation in every domain; a number of entities (possibly things, possibly properties) are posited and then the states of those entities and the rules governing them act as an explanation. But here we run into problems for dualism. Nothing stops us from identifying those entities, their states, and their interactions with some of the physical components of the brain (well, in theory you could argue that the brain just doesn’t work that way, that no corresponding parts exist, but we don’t know enough about the details of the brain to possibly prove such a claim yet). Thus every successful dualist explanation could equally well be a materialist explanation. (I call this the Chalmers problem. Chalmers posits that the non-physical mental properties parallel the information processing properties of the system. But if they parallel them perfectly, and thus explain the mind, why not just identify them?) But the dualist explanation posits something more than the materialist version of the same theory does: it must posit additional laws governing a new domain of mental stuff that make it behave in this way and stick to the right sort of physical systems. (The materialist version obviously can just invoke the laws of physics and the structure of the brain.)
Now we might be tempted to ask “why do these laws obtain?”, but that is a nonsensical question. If they really are fundamental laws, like the laws of physics are, such a question doesn’t make sense. But we can ask “how could we know that these laws obtain?”. And the dualist cannot give an answer to this question because they can’t know that the laws obtain.

Now of course we wouldn’t expect dualism to answer the question in the same way that materialism would. Materialism would appeal to neuroscience, and say that we know that the rules obtain because of our knowledge of neuroscience (or will know in the future when neuroscience develops further). Essentially then materialism answers the question by appealing to another discipline. Dualism is thus more like physics with respect to this question, as already indicated. And physics answers it by an appeal to the best explanation, saying that the physical laws are the best description of the observations made so far. But the laws of the non-physical domain cannot be known in this way, not even by appealing to our own mental life as evidence. Because our own mental life could be explained equally well by the materialist counterpart of the dualist theory, the one that identifies the theoretical entities with some facts about physical stuff rather than non-physical mental stuff. So while we might be able to know that the mental laws are such-and-such we could never know that they are the laws of a non-physical domain.

Now I wouldn’t say that these problems are unique to dualism. Any time we posit non-physical stuff as an attempt at an explanation we are going to run into basically the same problems. For example, Frege posited the sense to explain reference. But that theory can’t answer the question “why does a particular sense refer to what it does?”, just as dualism can’t say why the non-physical stuff is responsible for qualia and intentionality. And if the sense is supposed to be non-physical then how do we know that senses behave as the theory posits that they do? Obviously any evidence we obtain based on our observations of language use could equally well support a theory that identified the sense with some feature of the language center of the brain, and that theory could confirm that the senses actually behaved as we posit they do by studying the physical stuff they are identified with directly. So, as a broader claim, we might very well conclude that positing non-physical stuff in general isn’t good philosophy.

* Of course here I am picking on dualism again. Why do I pick on dualism so much? It’s not because the possibility that dualism might be true worries me. Although we don’t have a complete theory of mind yet the progress already made towards one strongly suggests that materialism is true, as neuroscience and psychology pull closer and closer together. Rather I tend to pick on dualism because the philosophy of mind is my area of expertise, and thus I judge that I am least likely to make a mistake while picking on it. And occasionally I do need to pick on a philosophical position in order to test whether a certain argumentative strategy works or not, just as a logician might test out a new rule of inference within the theory of natural numbers, in order to make sure it gives the results that are expected before applying it to new domains.

Additionally dualism embodies what I see as a very poor philosophical strategy, namely that of primarily attacking an opposing position and then claiming that the problems with that position give yours support (of course everyone does this to some extent, but most go on to argue for their theory itself by detailing its virtues and explanatory power). Dualists love to find flaws with materialism and then conclude that those flaws disprove materialism and thus prove dualism. But of course this is a silly strategy. First of all dualism is not the only alternative to materialism, and so even if we were to toss out materialism that would not force us to conclude that dualism was correct. And, more importantly, dualism has many flaws as well, as pointed out by materialists. If flaws with materialism are supposed to count against it then flaws with dualism should count against it as well. But dualists never seem to realize this, and seem to think that if they can just rule out materialism then dualism would be validated. This method, besides being relatively unconvincing, is simply a poor way to construct a theory, because less time is spent on the dualist theory of mind itself than is spent worrying about how that theory can be established and respond to objections (not to say that materialists never do this, but most of them have moved on, and focus more on what representational/functional properties are responsible for consciousness).

June 26, 2007

Successful Explanations (Or: How To Evaluate Philosophical Theories)

Filed under: Metaphilosophy — Peter @ 12:00 am

As frequent readers may know one of my many projects is to try and find a way to evaluate philosophical theories based on their content (instead of the arguments for them). Here then is my first proposal, partly unfinished, as to how that evaluation could be done. It comes from understanding the point of philosophical theories as the explanation of certain features of the world either not easily measurable or normative, and thus not captured by science. What then makes an explanation successful? I think the answer is actually very simple (and its very simplicity is why it proved so elusive); a good explanation answers all questions.

Of course what questions are acceptable to ask and what counts as an answer are aspects of this proposal about explanation that need to be spelled out with complete precision, or relying on this standard will prove less objective than hoped, as people will disagree about whether a theory has really answered all the questions.

Let me begin then by outlining the three ways in which a question may be answered by a theory:

a) The question is directly answered by the content of the theory, either because the theory directly makes a claim about it, or because a claim about it follows from the operation of certain theoretical entities. However, a qualification must be placed on such answers: it must not be the case that the theory can equally well explain the opposite (the content requirement, thus ruling out “magic” as being the best possible explanation). For example, a psychological theory might make claims about the internal state of Bob, and from that internal state answer the question “why does Bob want a hamburger?”. However this answer would not count if the theory could equally well answer the question “why does Bob not want a hamburger?” by appeal to the same internal state. Of course things can get tricky in scientific theories when separating observations that give us the information necessary to start explaining things with our theory from observations that we can reasonably expect the theory to say something about. But that doesn’t need to concern us too much when dealing with philosophy, as generally philosophical theories are less concerned with explaining specific events.
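The content requirement can be put in toy form (a rough sketch, with hypothetical names throughout): a purported explanation that would account equally well for an observation and its opposite rules nothing out, and so answers nothing:

```python
def satisfies_content_requirement(explains, observation, opposite):
    """An answer fails the content requirement if the theory accounts
    equally well for the observation and for its opposite; like "magic",
    it then rules nothing out."""
    return explains(observation) and not explains(opposite)

# A posited internal state that predicts the desire and nothing else:
hunger_explains = lambda obs: obs == "Bob wants a hamburger"
# "Magic" explains anything whatsoever:
magic_explains = lambda obs: True

print(satisfies_content_requirement(
    hunger_explains, "Bob wants a hamburger", "Bob does not want a hamburger"))
# → True
print(satisfies_content_requirement(
    magic_explains, "Bob wants a hamburger", "Bob does not want a hamburger"))
# → False
```

The sketch only models the structural point: an answer counts as an answer (under option a) only if the theory would not have licensed the opposite answer just as readily.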

b) The question is answered by appeal to other theories (by saying that a particular fact or a particular mechanism is the domain of some other theory). Theories appealed to in this way may not actually have the answers to these questions yet, but such answers fail to count as answers if the theory appealed to doesn’t provide the right answer (if it denies that the fact is true or that the mechanism exists). For example, an evolutionary theory may appeal to molecular biology in order to explain how traits are inherited (to answer the question “how are traits passed from one generation to the next?”). However this would not count as an answer if molecular biology failed to provide any mechanism by which traits could be inherited. It is reasonable to expect every philosophical theory to answer at least some questions in this way; if it didn’t, that would imply that it was self-contained, which in turn would imply that it wasn’t about the world, since the world is described successfully, at least in part, by other theories.

c) The question is revealed as nonsensical or as stemming from a different understanding of the concepts involved. Note that this is different from asking a question that is properly answered with a denial, such as “why is the sky green?”. Usually such nonsensical questions occur because we are demanding a kind of explanation that is inappropriate for the domain we are applying it to. “What is the purpose of the sun?” is one example of such a nonsensical question, since the sun wasn’t designed. Now we might say that the sun has no purpose, as an attempt at an answer, but what is really going on is that we are simply unable to apply the methods by which we would normally get an answer, namely looking at the factors that guided the design of the sun; there are no such factors, and thus no way to get an answer using those methods. Similarly “what is the cause of the big bang?” is a non-question under some theories about it, coupled with a scientific understanding of causation, as causation requires us to find an event earlier in time that leads to the later event under natural law. But obviously it is impossible to find such an event before the beginning of time and outside of natural law. And so at best we can say that there is no cause. Such answers are, however, very rare, and I wouldn’t expect to encounter them often while evaluating a philosophical theory.

And for complete clarity it is also necessary to spell out what questions we can ask of an explanation, because obviously it would be wrong to call fluid dynamics a poor explanation of the way liquids work on the grounds that it cannot answer questions about the nature of truth. Here I must be less precise. Let us simply agree that each theory has some specific object that it intends to explain (fluids, meaning, mind, ethics, …). Given that, two questions naturally arise: “when is that thing present?” and “what rules govern it?”. Admittedly these questions are a bit vague. Usually, for any topic that a theory is intended to address, there are also a number of traditional questions about that topic, and it is reasonable to expect the explanation to answer them as well. In addition there are further questions, unique to each individual theory, of the form “why is that the case?” about specific aspects of the theory. In the case of fluid dynamics we might ask, for example, “why does law X govern the fluid?”, which could be answered as in b) above, by appealing to the micro-physics of the fluid. And of course we are free to ask questions about the answers given to other questions, until our questions cannot be answered, are answered by appeal to other theories (response b), or are revealed as nonsensical (response c).

Now that I have spelled out, somewhat vaguely, what the ability to answer all the questions posed to a theory entails, allow me to lay out a few reasons to think that this is a promising way of evaluating philosophical theories. The first is that, at a certain level, scientific and philosophical theories are both theories about the world. Thus whatever method we endorse for evaluating philosophical theories should also apply to scientific theories, restricted to measurable phenomena. And indeed this way of evaluating theories, so restricted, becomes falsification by experiment. One kind of question scientific theories are supposed to answer is “why did X occur?”, where X is some specific event. Thus a successful scientific theory is one that explains all the observable phenomena. If some phenomenon is observed that the theory fails to predict, or predicts incorrectly, then the theory is falsified, revealed to be an incomplete explanation, and this is the core of the scientific method. In our terminology, falsification is simply the discovery of specific questions, “why did X occur?”, that the theory can’t answer. So the scientific method is indeed revealed to be a special application of the general method outlined here.

A second reason to like this approach to evaluating philosophical theories is that it admits of degrees of wrongness. An explanation can fail by leaving some questions unanswered and still be a better theory than competitors that leave more questions unanswered. Hand in hand with this, the questions a theory is unable to answer reveal exactly where it is in need of improvement or elaboration. Again this can be seen as a reflection of the scientific method, where the phenomena that are wrongly predicted reveal the domains the theory fails to capture. In any case this is certainly better than trying to refute a philosophical theory by showing that it contradicts itself or some other premises, because doing that reveals nothing about where the problem lies; the problem may even lie in the way the contradiction was derived.

The third reason to believe that this may be a promising way of evaluating philosophical theories is that it rejects as faulty some philosophical theories that I feel fairly confident are lacking. Of course this may well motivate some to reject my account of what counts as a good explanation, if they like those theories more than they like this theory about evaluating explanations. Oh well. Let me describe three theories it rejects. The first is dualism as an explanation of the mind. Dualism can’t answer quite a few questions about the mind, even in principle, starting with “why do minded systems have minds?” and “how does the non-physical mind generate experience?”. That dualism can’t answer many such questions is no surprise, because the theory is motivated mostly by a belief that the mind can’t be material, and not by a convincing dualist theory about the mind. A second philosophical theory revealed as lacking is the causal theory of names. The causal theory of names is an attempt to answer the question “why does a name refer to the individual that it does?”, in the context of theories involving rigid designation, by positing that users of a name can trace their use of that name back along some causal chain to an original naming event. But that raises a question the theory cannot answer: “why does that causal chain affect what the name refers to?” (revealing that the theory has not actually answered the original question, which was really meant to uncover how reference works in the case of proper names). Again, this is perhaps not too surprising, since rigid designators and the causal theory of names were motivated primarily as a response to perceived problems with definite descriptions, and not as part of an independently compelling theory about reference. And, finally, we come to god invoked as an explanation in general.
God can’t serve as a successful explanation because the appeal to god fails to answer questions about why god acts as he does. Of course partial explanations such as “god is good” answer general questions about god’s behavior, but not specific ones (why did god do X rather than Y?), because they could equally well explain why god did Y, had he done that instead of X (the content requirement for type a) answers). (You might still consistently claim, if you subscribe to this method of evaluating philosophical theories, that god is good theology, just not good philosophy.)

Naturally, since this is a philosophical theory about how to evaluate philosophical theories, we should be able to evaluate it by its own standards. Most questions naturally posed to this theory have obvious answers, I think, although given that this method works much like falsification there is always the chance that someone will come up with a pressing question about explanation, one I haven’t anticipated, that it fails to answer. Let me just address the biggest question: why should we expect this method to yield true philosophical theories? That is a question I think must be answered by epistemology, but let me sketch an answer rather than leave you completely in the dark. Obviously epistemology justifies the scientific method (if it didn’t, it couldn’t answer the question “why is the scientific method so successful?”). And for the same reasons that it justifies the scientific method I expect it to justify this method, because false theories are eventually weeded out by their inability to provide answers, or by their providing the wrong answers (something especially likely to happen where they interface with other theories).
