On Philosophy

July 11, 2007

Cognitive Dissonance, Confirmation Bias, And Philosophy

Filed under: Metaphilosophy — Peter @ 12:00 am

Let me state my bias up front: I think philosophy is in serious trouble, and I think we (philosophers) should do something about it. This is not to say I dislike current philosophical theories; many of them seem promising to me. And I certainly like trying to solve philosophical problems. My problem is not with the discipline, but with the method. So if you like philosophy as it is then you aren’t going to like what I have to say. But I think it is important to at least think about the problems I bring up here, because what we know scientifically about human rationality implies that our philosophical reasoning is going to be flawed in ways that we cannot, in principle, be aware of or properly compensate for. We cannot introspect and be sure that we are not falling victim to cognitive dissonance or confirmation bias. And that is a problem for philosophy because as we currently approach philosophical problems it is only our rationality that leads us to favor one position over another.

There are two primary problems that the existence of cognitive dissonance and confirmation bias pose for philosophy in its current form:

1: We unconsciously revise our beliefs so as to preserve the positions that we are most committed to. This unconscious revision can include the introduction of new and unjustified beliefs that lend support to the threatened belief or resolve the problems facing it. And we will be strongly committed to beliefs introduced in this way, simply in virtue of their preserving a belief that we are already strongly committed to. The classic example of this effect is cult members who, when their cult’s doomsday prediction fails to come to pass, adopt elaborate rationalizations of why the prediction turned out to be false, and who are completely convinced of the reasonableness of these rationalizations. The same effect is at work when our first reaction to hearing something contrary to a position we hold is to look for a counterargument rather than to entertain the idea that we might be wrong.

2: The more strongly we are committed to a position or argument, the less we will focus on evidence against it (times when it falls short, arguments against it), and the less weight we will give such evidence; at the same time we will focus more on evidence supporting it, and give that evidence more weight. And, as with 1, this is an unconscious effect, which means that every philosopher will feel justified in thinking that the positions they hold are the right ones. Every position has arguments for it, and there is no way to decisively reject a position, so a philosopher is always free to unconsciously pick and choose what to give weight to such that the positions they are committed to come out on top.

Now by themselves these two problems are worrisome, but we might hope that we are generally most strongly committed to the best theories (this may very well be true for scientists, given their motivations), so that while the unconscious effects described in 1 and 2 may occasionally be a hindrance, they are not devastating to philosophy. But that is an overly optimistic assessment. There are three major reasons that we become committed to positions that have nothing to do with the truth, usefulness, or explanatory power of those positions.

3: If we defend a position or argue in its favor we will be more strongly committed to it, and if that argument is made public it will increase that commitment. And of course by its very nature philosophy encourages us to make public our thoughts on philosophical issues, especially in the case of the professional philosopher who is very strongly motivated to write papers in order to further their career.

Corollary: the more arguments you make against a philosophical position someone holds, the more strongly they will hold that position (since 1 motivates them to defend it against each argument you make). This means that the back-and-forth nature of philosophical discourse only aggravates the problem: instead of moving both parties closer to the truth, the exchange just tends to radicalize them. Additionally, a series of papers for and against a position only worsens the problems created by 2, since it gives a philosopher who already favors one side in the debate many more reasons to think that they are correct, given that they will unconsciously tend to ignore the corresponding arguments against that position.

4: Because philosophers generally think of themselves as rational, and highly value rationality, philosophers will generally be most strongly committed to the positions they hold for fundamentally (originally) non-rational reasons. Of course we all begin doing philosophy with lots of ideas that we have simply “picked up”. But few of us try to get rid of all the unjustified beliefs we have at the beginning of our philosophical studies (and it is doubtful that we could even if we wanted to), and these leftover positions from our pre-philosophical days acquire more weight later in our philosophical careers, since when we find ourselves believing them we convince ourselves that we have good reason to do so. And, not wanting to admit to ourselves that we are irrational, these may become our most fiercely defended beliefs: it is much easier to give up a belief that we really do hold for rational reasons, since we need only come to recognize those reasons as unjustified.

5: In general the positions/arguments that we spent the most effort to understand will seem like the best positions/arguments, and we will be the most committed to them. I don’t want to point any fingers regarding this issue; of course some of you will assume I mean continental philosophy, given my views on it expressed elsewhere. It is true that I think this is a problem for at least some continental philosophy (specifically the worst bits and the most obscure authors), but there are a good number of analytic positions, which I will leave nameless, that seem to suffer from the same problem. This effect also explains the fact that everyone who spends a great deal of time studying any particular philosopher or system (Wittgenstein, Kant, Heidegger, etc.) comes away with the conclusion that they were right, that they were on the right track, or that there is something really important to learn from them. No one who spends a long time studying a philosopher or system ever goes on to defend the idea that they should be ignored.* It is even possible that philosophy as a whole suffers from this problem; we spend so much time learning to do philosophy well that it becomes unthinkable to question it, lest all that time have been spent in vain.

Of course none of this is a critique of the truth of any particular philosophical position. It is entirely possible for a person to be suffering from the strongest expression of cognitive dissonance and still be right. But given that there is no general agreement in philosophy, it seems reasonable to worry that cognitive dissonance and confirmation bias may be having a detrimental effect. And it certainly should provoke some self-doubt. I don’t want the positions I hold to be merely the ones that make me happy or the ones that cohere best with my existing biases; I want the positions I hold to be the correct positions. And because the effects described here are unconscious, I can’t be sure that they are. In fact I can be fairly sure that some, possibly most, of the positions I hold are held at least in part because of these psychological effects. And as philosophy stands now I can only hope that they happen to be true despite their less than perfect motivation, and this makes me dissatisfied with philosophy in its current form.

Of course there are ways to overcome these effects, but simply recognizing that they exist is not enough. For starters, we don’t know how strongly our opinions are being affected by them, and hence any attempt at compensation will simply be a shot in the dark, and may even result in holding positions that have less rational justification than the ones we had before (maybe we were right, but for the wrong reasons). Additionally, cognitive dissonance is not a constant effect: although everyone’s reasoning is affected by it, sometimes we are affected less strongly than at other times. But, unfortunately, we can’t tell when those times are; in fact we always think we are free from cognitive dissonance, especially when we are being most strongly affected by it. In general, when people are confronted with the existence of cognitive dissonance they deny that it has any effect on them; they proceed to describe in full detail how rational they are, how they appropriately weight evidence, and how they have changed their opinions on numerous occasions because of the demands of rationality. But everyone says that. So just because you are convinced that you have evidence that you are free from cognitive dissonance, or at least able to free yourself of it when you try really hard to be rational, doesn’t mean that you actually are free from it. What it means is that you want to be rational, and so cognitive dissonance kicks in and prevents you from realizing that you are being affected by it. I am certainly affected by cognitive dissonance; in fact I am sure it is affecting my judgment on this very issue. But I don’t know how or where it is affecting my judgment, nor will I be convinced that it is even when someone points it out to me, so I cannot compensate for it.

The remedy in other disciplines is to rely on objective standards. This doesn’t necessarily mean making some kind of measurement, but it does mean that the discipline must ultimately rest on something that can’t be rationalized away. In the sciences, of course, we have measurements. In mathematics there is the proof. And because of this objective standard it doesn’t matter how much you want something to be true or want it to be false; if there is a correct proof then the result stands, and no amount of rationalizing can change that fact. Psychology is a great example of how this can make a difference in a discipline. In its early days psychology was a lot like philosophy: psychologists judged theories by how coherent they were and how well they were argued for. And this led to a lot of bad psychology, such as Freudian psychoanalysis. And of course the psychologists of this era were unable to tell that they were doing bad psychology, because of cognitive dissonance and confirmation bias, especially because of points 3 and 5. After spending so long learning how to do psychoanalysis and defending it, psychologists unconsciously convinced themselves that it worked. And without any objective standard there was no reason for them to stop believing that it worked, and failures were easily rationalized as misapplications of psychoanalysis. Eventually, however, psychology shifted to rely more on experiments, and while psychologists have yet to come up with theories as comprehensive as those of the days of psychoanalysis, they have discovered a number of interesting and useful psychological facts, such as the existence of cognitive dissonance and confirmation bias.

I don’t know what objective standard we can find for philosophy, but I am convinced that we need one, not necessarily to make big changes, but to ensure that we aren’t doing the philosophical equivalent of Freudian psychoanalysis. And that objective standard is certainly not the valid argument (corresponding to the proof in mathematics), because, as any familiarity with philosophical journals reveals, getting a consensus as to which arguments work and which don’t, or changing someone’s mind about whether their argument works, is impossible.

* One alternate explanation is that the test of time has left only the best philosophers and systems still standing. But I doubt that, because these philosophers and systems were studied intensely, and with a large following, even in their own time (or at least shortly thereafter). Note also that these philosophers tend to be the most obscure writers and the hardest to understand, either in the way they write or in the content of their ideas (with, of course, a few notable exceptions). It might also be that the people who choose to study a particular philosopher or system are self-selecting; only philosophers who already agree to some extent choose to spend so much time studying them. That is a better explanation, but it still isn’t complete. What it overlooks is that philosophers don’t become experts on such a topic overnight. Generally it happens without any conscious intention: they write one paper on the topic, after which it is easier to write a follow-up paper, and so on, until they find themselves becoming experts without ever having intended to. Which means that they aren’t self-selecting enough to explain why so few experts completely disagree with the philosopher or system they are expert on.

4 Comments

  1. This could be turned into an entire book… something along the lines of Ellul’s Technological Society, only where the philosophical paradigm becomes autonomous.

    This problem is compounded to its extremity in the academic world. Academic journals and university programs encourage the construction of massive bodies of thinkers who all agree on a particular paradigm, have their degrees in that paradigm, and have their professional careers buried in it. If one day we find error in that paradigm, there is too much at stake to go back on it.

    Comment by Matthew D. — July 11, 2007 @ 7:04 am

  2. Some very interesting points; I hope you continue to develop them.
    My own training was in the physical sciences, where criteria are typically fairly clear. My spouse is a mathematical psychologist, and has always pushed her students to establish *very* clear operational definitions, without which they are simply churning words.
    I rather like Quine’s idea of epistemology as a subset of cognitive psychology – it would keep us aware of some of our epistemic limitations, our hard-wired prejudices, and of the need for clear operational definitions if we are to settle arguments in a meaningful way.

    Comment by carey — July 11, 2007 @ 8:11 am

  3. A good post!

http://www.ou.edu/ouphil/faculty/chris/crmscreen.pdf is a very good manual (by a co-editor of the SEP) for minimizing our biases.

On ad hoc hypotheses and rationalization: my http://agorametaphysica.blogspot.com/2006/07/quotes-and-notes-on-liar-paradox.html contains some hints that an ad hoc hypothesis can still be the most rational hypothesis.

    I do not believe there is some new objective standard in philosophy. See Bill Vallicella’s posts and subsequent comments on dissent in philosophy and in/tractability of philosophical problems:
    http://maverickphilosopher.powerblogs.com/posts/1182222365.shtml
    http://maverickphilosopher.powerblogs.com/posts/1167847646.shtml

Finally, Peter advises to “get rid of all our unjustified beliefs”. A nice discussion of this can be found at http://www.iap.li/oldversion/site/research/Back_to_Things_Themeselves/Back_to_Things_In_Themeselves.pdf ; check out how it treats the concept of epoché (the withholding of belief).

    Sorry for the bare links. “Philosophy’s long, life short” (Plantinga).

    Comment by Vlastimil Vohánka — July 12, 2007 @ 3:12 am

I should have said: I don’t believe in a new objective standard FOR philosophy. That is, I don’t believe anything like this will ever be found. It will forever be about dissent over arguments. I still do hope for reaching knowledge of clear and rigorous solutions to some philosophical problems, but I do not expect consensus about such solutions.

    Comment by Vlastimil Vohánka — July 12, 2007 @ 3:27 am

