On Philosophy

October 26, 2006

Knowledge Theories

Filed under: Epistemology — Peter @ 12:00 am

What should we require of a theory about knowledge? Should we require that having such a theory, and following it, yield the greatest possible number of true beliefs* that one could have (or those most likely to be true), or that following it eliminate our false beliefs? If either of those were a criterion for a theory about knowledge then constructing a successful theory would be easy: one could constantly report the objects of one’s experience, as objects of experience (“I think that I am perceiving a chair”), and form as many true beliefs as possible in that way. Or one could believe nothing, and thus eliminate the possibility of having false beliefs altogether. Neither of these alternatives is a good theory of knowledge.

The problem with thinking about knowledge in this way is that we are thinking too globally: we are thinking of our theory of knowledge as something that would guide our entire process of belief formation, possibly even our thinking as a whole, which is not something that we should expect it to do. Instead we need to return to the purpose of knowledge in everyday life, which is to allow us to accomplish specific goals. If I want to go out to eat I need to have knowledge about the location of restaurants and how to get to them. A theory of knowledge, then, should enable us to examine specific beliefs (such as: I believe that I can drive) and recommend that we accept or reject them in a way that would maximize our success in achieving our other goals.

We might then be tempted to jump right in and start creating rules, perhaps probabilistic ones, which would aid in determining whether we should accept a certain belief as knowledge. The exact nature of such rules is, however, best left to disciplines such as ameliorative psychology. There are several reasons for this. One is that creating such rules can involve careers’ worth of focused study and experiment (as was the case with SPRs), a task too specialized to be part of philosophy. In addition, which process to use in determining which beliefs to have is dependent on the situation: one may choose a less reliable process over a more reliable one when it is faster and the results don’t matter as much.

What a theory about knowledge can, and should do, is create criteria that any rule purporting to separate good beliefs from bad should meet, and show how, from the perspective of someone seeking knowledge, such rules should be chosen (i.e. so one doesn’t pick a bad rule with the mistaken belief that it is good)**. So what are these criteria? Well, for one, a knowledge-identifying rule should never pick out as knowledge any belief that is irrational. And rationality covers a good deal of ground, perhaps more than one might think (see here and here). Another good criterion is that the rule be reliable (that is, as far as the person adhering to it can tell); reliability isn’t something that one can determine right away, but in general if the beliefs picked out as knowledge turn out to be wrong more than half of the time then one is better off guessing. We could probably come up with more criteria than these, and in a more systematic fashion, but that is a project I will leave for another day; I simply wished to demonstrate what the content of such a theory might be. As a final reminder I should mention, again, that the beliefs that are picked out by these rules, to the best of the rule follower’s ability, are what is to be called knowledge, even if the person is deluded into believing that they have followed the rules correctly; knowledge is a limiting condition on how “good” one can make one’s own beliefs, not an absolute standard.
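The reliability criterion above amounts to a simple quantitative check. As a minimal sketch (the function name and the bare true/false track record are my illustration, not anything proposed in the post), a rule earns the label “reliable” only if the beliefs it has endorsed have turned out true more often than guessing would:

```python
def rule_is_reliable(track_record):
    """Judge whether a knowledge-identifying rule beats guessing.

    track_record is a list of True/False outcomes, one per belief
    the rule endorsed as knowledge. The rule passes only if its
    endorsed beliefs were true more than half of the time.
    """
    if not track_record:
        return False  # no evidence yet, so withhold the label "reliable"
    hits = sum(1 for outcome in track_record if outcome)
    return hits / len(track_record) > 0.5

# A rule whose endorsed beliefs were true 3 times out of 4 passes;
# one that was right only 1 time in 4 does worse than guessing.
print(rule_is_reliable([True, True, True, False]))   # True
print(rule_is_reliable([True, False, False, False])) # False
```

Note that, as the post says, reliability isn’t something one can determine right away: the check only becomes meaningful once the rule has accumulated a track record.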

Knowledge, as I have described it here, is simply a belief that has been validated by some knowledge-identifying rule. And obviously, since some rules are “better” than others, knowledge will come in degrees, with the beliefs selected by “better” rules being considered “better”/more accurate/more likely knowledge. This means that it is incorrect to claim that a belief is knowledge unless some rule has picked it out. And since the knowledge-identifying rules are not domain-specific (the same rules apply to beliefs about different subjects, even though the source of evidence for these beliefs may differ), one can’t hold some beliefs to different standards and still claim that they are knowledge. For example, one can’t consider a belief formed by some intuition as knowledge unless one is following a rule that claims that all intuitions are knowledge, and it is unlikely such a rule would pass the reliability criterion, even if some of the beliefs it picks out as knowledge (such as “I just know plants have feelings”) are untestable; holding untestable beliefs to a different standard is not defensible.

* Obviously I am using “belief” in a loose sense here. Beliefs are, by definition, unconscious (see here), and thus altering one’s beliefs may be a task more difficult than consciously following some procedure. Fortunately it doesn’t matter, because when we talk about beliefs in this context we really mean something like: statements that are used in conscious planning, reasoning, and thought as if they were true. One might know that insects aren’t threatening and still have the belief (demonstrated by a phobia) that they are.

** Perhaps, strictly speaking, this is no longer a theory about knowledge but a theory about knowledge acquisition; in terms of philosophy, however, they are essentially the same thing, because if we have such a theory about knowledge acquisition we can simply define as knowledge the beliefs that it recommends we have.



  1. “A theory of knowledge, then, should … be able to recommend that we accept (beliefs) or reject them in a way that would maximize our success in terms of achieving our other goals.”

    It seems to me that we should probably put some restrictions on what these “goals” may or may not be, otherwise we may give too much credence to pragmatic justifications over epistemic justifications. Surely we can’t define knowledge, or a theory of such in terms of accomplishing any old goal, can we?

    “Knowledge, as I have described it here, is simply a belief that has been validated by some knowledge-identifying rule.”

    Is this not circular? You seem to be saying “knowledge is that which a knowledge picker-outer picks out.”

    “What a theory about knowledge can, and should do, is create criteria that any rule purporting to separate good beliefs from bad should meet and show how, from the perspective of someone seeking knowledge, such rules should be chosen”

    I thought that a theory of knowledge was supposed to first and foremost describe what the difference is between “good” and “bad” beliefs. Once that is established, then we can move on to such criteria which seem to me to be more a part of a theory of rationality.

    Comment by Jeff G — October 26, 2006 @ 11:06 am

  2. a) No, I don’t think being pragmatic is a bad thing. You can build a lot on what is required to achieve any goal at all, since the best way to achieve goals is to be “in tune” with reality.

    b) knowledge-identifying rule = rule that picks out beliefs that are conducive to achieving our goals (which means that they reflect reality). Nothing circular about that. I just gave it a meaningful name considering the context.

    c) As it turned out, given the arguments presented, that was a better description for knowledge-identifying rules, and not a knowledge theory. This is the same move that has been made in the philosophy of language with regard to a theory of meaning.

    Comment by Peter — October 26, 2006 @ 11:34 am

  3. Well, I don’t think being pragmatic is necessarily bad either, especially when pragmatic justification is in line with epistemic justification. But what about when the two are in conflict with one another? In fact, it may be argued that these cases are the very ones which we want a theory of knowledge/rationality to address most, but I don’t see your theory addressing such instances due to its not recognizing, for the most part, that they exist.

    Comment by Jeff G — October 26, 2006 @ 11:55 am

  4. That’s just begging the question, assuming that there is some extra epistemic justification that the theory doesn’t capture. You would have to come up with a specific example.

    Comment by Peter — October 26, 2006 @ 12:11 pm

  5. A typical failed counterexample goes something like this: I want to be happy; thinking that my dead dog is alive would make me happy; thus thinking my dead dog is alive is knowledge. But it isn’t, at least under the account given here. A knowledge-identifying rule, turned on what would lead to happiness, might indeed pick out the statement “thinking that my dead dog is alive would make me happy”, making that statement knowledge. However, believing that one’s dog is alive as a result of that wouldn’t be knowledge; that belief itself is not picked out by the rule.

    Comment by Peter — October 26, 2006 @ 12:27 pm

  6. Actually, the typical case I was thinking of is the man who is at bat in the 9th inning after having struck out 3 times during the day. The truth of the matter is that he probably won’t get a hit, but does that mean that he should believe it? After all, there is that old saying: “whether you think you can or you think you can’t, you’re probably right.”

    Comment by Jeff G — October 26, 2006 @ 2:53 pm

  7. And a knowledge theory as I have given it above would conclude: “If you want to maximize your chance of success you should believe that you can do it”. How is this a problem? It doesn’t imply that the belief that “you can succeed” is knowledge.

    Comment by Peter — October 26, 2006 @ 3:40 pm

  8. But I thought that knowledge simply was belief which helped achieve our goals in the world? How is this clear example of belief distinguished from knowledge as you define it?

    Furthermore, what does your theory say about William James’ pragmatic argument for religion? He argues that religion consists of 2 propositions:

    1) The best things are the more eternal things.
    2) We are better off even now if we believe [proposition-1] to be true.

    Let’s pretend that (2) is actually true (a highly questionable assumption). Furthermore, let’s assume (rather safely in my opinion) that the evidence in the world is overwhelmingly against (1). Should we believe (1) or not? Can a religionist claim to “know” (1) to be true?

    Comment by Jeff G — October 26, 2006 @ 3:52 pm

  9. You forgot the “as picked out by rules” part, which makes all the difference. You can get rules that yield “believing X will aid in doing Y” based on evidence (“in the past believing X led to Y”), but no rule will deduce “X” by itself. For example, rules such as “if you want to do Y believe that you can do Y” simply don’t work in the majority of cases (e.g. flight, driving, calculating, etc.). Secondly, you are confusing acts with the beliefs that are to be evaluated. If I want to go to the store then the act of driving to the store is not something I need to evaluate as knowledge or not; it is an act that is accomplishing my goal. Likewise, if my goal is to be happy then believing something delusional is an act that is accomplishing my goal, not something that needs to be evaluated in order to successfully accomplish my goal, although before one chooses to believe something delusional the fact that “believing this delusional fact will make me happy” would need to be evaluated, a point I have made twice above. This should answer your question regarding James’ argument: replace “something delusional” with (1); if you are choosing to believe it in order to be happy then it is an action to accomplish your goal, not something that has to be evaluated in order to act successfully to achieve your goal.

    Comment by Peter — October 26, 2006 @ 4:02 pm
