On Philosophy

June 13, 2006

Hilary Putnam on Why We Aren’t Brains in a Vat

Filed under: Intentionality, Language — Peter @ 1:41 am

Another long title, I know. This post will be the first in which the arguments I present aren’t my own, but rather a slight improvement on Putnam’s argument in Reason, Truth, and History. The goal of the argument is to show that we cannot meaningfully talk about, or even speculate about, being brains in a vat, or, more generally, about reality being somehow an illusion. And if we can’t speak meaningfully about it, why bother considering it at all?

The crucial premise in this argument is that to refer meaningfully to something we must have had some perception of the thing we are referring to. Somehow (depending on your theory of intentionality) it is the perception that generates the meaning. For example, since I have seen the tree in front of my house, I can meaningfully refer to it. Now suppose that you have a tree in front of your house, which I have never seen, but which you have told me about. If I talk about the tree in front of your house, the meaning of my words comes not from the tree but from your description of it. It is even possible that there is no tree at all in front of your house; even so, my talk about it has meaning, because the meaning was derived from the description I heard and not from the tree itself. In actuality the referent of my words was not the real tree but the “tree” created by your description. I know this sounds vague, but since I am trying to be theory-of-intentionality-neutral, you will have to fill in the details based on whichever particular theory you subscribe to.

Now suppose that there are people who are physically brains in a vat but who are living in a simulated world. All they can meaningfully refer to, and think about, are entities within their simulated world. (And possibly abstract entities as well, depending again on the theory of intentionality.) When we quote what people in this situation might say, let us follow their words with *s to show that the meaning behind their words is different from ours. For example, when we speak of cats we mean collections of atoms that tend to lie in the sun; when people in the simulated world think of cats*, they are referring to some aspect of the program that provides them with certain visual stimuli, not a collection of atoms. Thus if they speak about brains* and vats*, they aren’t speaking of physical brains and vats but of objects, possibly imagined, within the simulation itself. They cannot speak of brains or vats (without *s), yet it is only those words that could give their hypothesis meaning, because clearly they are not brains* in vats*; after all, they have arms* and legs*. Thus neither we nor our hypothetical people in a simulated world can talk meaningfully about ourselves possibly being brains in a vat.

Even if they cannot meaningfully talk about the external reality, to which they have no access, we might still think that it could be meaningful for someone in their position to deny the reality of the world. Upon reflection on what such a denial entails, however, it becomes apparent that this claim too is meaningless. Consider, for example, the denial of the reality of a hallucination. The person suffering from the hallucination claims that it is “not real” because no one else is able to perceive it (although it was a real hallucination). In the case of a simulated world, however, this is not a possible use of the claim of unreality, since other people share the same “unreal” world. Perhaps then they mean that it is like a hologram. But when we deny the reality of a hologram we do not deny that it is a real hologram, only that it is not what it appears to be. To make a denial of reality in this fashion, however, requires that the speaker be able to say what the thing really is (it is really a hologram), and once again the people in the simulated world cannot meaningfully refer to their experience of a physical universe as being something else, since they have no experience of what that something else could be.

What is left that a person in such a simulation could meaningfully claim? We might think that even if they couldn’t deny their reality, they might be able to meaningfully insist that there are some other aspects of reality, inaccessible to them, that are really the cause of their experiences. In this case the person is not denying the reality of the simulation, simply saying that there may be more to reality than is perceived. However, once again we run up against the problem that if this extra reality can’t be observed to have a causal effect, then there is no way that person could meaningfully talk about it, and in fact they could have no reason to believe that it even exists.

Thus the only meaningful claim we can make along these lines is something to the effect of “there may be more to reality than what we have observed so far, although I cannot say what,” and this claim is so empty that we might as well not make it at all.

Why then does the claim “we are all really brains in vats” seem meaningful? It is because we are confusing objects that we can talk meaningfully about with objects in some hyper-reality (more real than what we are perceiving), objects about which we have no information whatsoever. In reality, saying “we are all brains in vats” is just as meaningful as saying “we are all akhaf in uyawer”; since we have no way of knowing what the hyper-reality is like, why use the same words for it? Yes, we could imagine real brains in vats, but then those people, living in a simulated world, would have no way of knowing what our reality was like, and thus could not meaningfully form a hypothesis about it. Of course, all bets are off if you let information from one reality leak into the other, but since there is no evidence that this happens in our “real” world, we should be satisfied that we are not akhaf in uyawer.