On Philosophy

October 31, 2006

Self-Consciousness and Consciousness

Filed under: Mind — Peter @ 1:23 am

Previously I have pointed out self-consciousness as one property of the mind that is essential for experience to be conscious. (see here) I still think that this is the case, but given some of the other theses about consciousness that I hold, it is probably in need of some clarification. Specifically, the mind must be able to be self-conscious, but it doesn’t have to be self-conscious during every experience; it is enough that in some future moment, related in the correct way to the past experience, it is self-conscious.

Of course if we believe that self-consciousness is usually present in some form or another, which seems quite reasonable, this particular distinction may seem irrelevant. I have only made it because there are some times in life where it may seem, in retrospect, that we have been conscious without any self-consciousness, and additionally it highlights some of the peculiarities in the way we decide whether a moment was conscious, a point I have also brought up elsewhere.

An example of such a time, in which we might be conscious without self-consciousness, is a period in which we are completely absorbed in some activity (absorbed in the world), a time when we “lose ourselves” in the world. However, we can remember what we experienced at that time, and of course if we remember something we remember it as being conscious, and so when we reflect upon it later it seems most natural to say that our experiences at that time were conscious. Let us assume that situations such as this can really happen as I describe them, and that it isn’t the case that there is some minimal self-consciousness present (which seems possible, but is not a point we need to press). Should we then give up self-consciousness as a requirement for consciousness, or should we give up our judgment that those remembered experiences were conscious?

We should do neither. The seeming dilemma that we are presented with here is caused only by a poor approach to the problem, and not by a real difficulty. Our approach is flawed in two ways. First, it assumes that we can meaningfully ask whether some moments are conscious by themselves. My position on consciousness is that it is a property that can be said to apply to some temporally extended system. A single moment, or a small set of moments, then is simply not the kind of thing that can properly be said to be conscious, outside the context of the rest of the temporally extended system, just as you can’t meaningfully ask of a part of a statue whether that part is a statue. The second problem comes from our treatment of every moment as belonging to the same person, which leads us to conclude that if the experience is conscious at one moment it must have been conscious earlier as well, since it is the same person for whom it is or isn’t conscious. (See my arguments against identity over time here.)

Of course this may seem like solving the same problem twice, and in conflicting ways. Approaching consciousness as a property of a temporally extended system tells us that there is no real problem, because those moments without self-consciousness are properly part of a larger conscious system, which includes self-consciousness at later moments. On the other hand, we can also attempt to solve the apparent problem by arguing that the experiences lacking self-consciousness are conscious to the person at the later time, who possesses self-consciousness, but not to the person at the earlier time, who doesn’t. And so it may seem like these approaches are pulling us in opposite directions. Of course these are both approaches that I advocate, and since I don’t have multiple personalities you can safely assume that there is some fundamental unity. The unity is this: when considering consciousness as a temporally extended system there are no rules about which moments begin the system and which moments end it. Of course you could consider the person’s entire life as one system, and this is what we are naturally inclined to do, but you could also take smaller portions of it and determine whether they were conscious, independently of each other. So our second approach, arguing that the experience is conscious to the person at one moment and not to the person at the earlier moment, is like taking the two periods of the person’s life and determining whether they are conscious systems independently. Of course you could argue that taking the full extent of the person’s life as one conscious system is the only valid way of examining the person, and that is probably fair in some ways, but one can’t recover identity over time from it, since if the person’s consciousness “branches” then there will be two temporally extended systems to study, which share in common some initial moments. Of course this doesn’t pose any problems when determining whether they are conscious, since our definition of a conscious system is not based upon the notion of a temporally extended person.


October 30, 2006

Scraps: Homophonic T Sentences

Filed under: Language — Peter @ 12:00 am

Currently I am engaged in writing a critical response to the paper “Knowledge of Meaning and Theories of Truth” by Richard K. Larson and Gabriel Segal. It is quite an interesting paper, so interesting that I had more to say about it than I had space. However, I have no such constraints here. So what follows is an omitted section, responding to their claim that even a homophonic T sentence is informative. (A homophonic T sentence is one in which the object language and the metalanguage are the same language).

No matter what role they are to play in understanding meaning, it is clear that the T sentences yielded by a T theory should provide us with some new information; they shouldn’t be trivial. Triviality would imply that the T sentences are things that could be known a priori, much like an assertion to the effect that a = a. Although such an assertion is true, it isn’t informative about the objects themselves, and is something that we can know to be true without any knowledge of those objects. So if T sentences are trivial it isn’t possible that generating them via a T theory could be informative. Larson and Segal claim that the triviality of T sentences is just an illusion generated by homophonic T sentences, which appear trivial only because, to understand a homophonic T sentence, we must already understand both the object language and the metalanguage, and hence there is nothing for it to inform us about. Clearly T sentences in which the object language and the metalanguage differ don’t appear trivial, and so the triviality of the homophonic T sentences must be an illusion: they really do say something substantive.[1]

Or must they? Just because some statements of a form are non-trivial doesn’t mean all statements of that form are non-trivial. Consider equality: a = b isn’t trivial, but a = a is, and it is not the case that, because a = b isn’t trivial, a = a really says something substantive. Given this it seems perfectly reasonable to suppose that homophonic T sentences are trivial, even if their multi-linguistic counterparts aren’t. There are two conclusions we can draw from this. One is that T sentences are only informative about meaning when they act in a translational capacity, i.e. when the person possessing the T-sentence-yielding theory speaks only the metalanguage, so that the T sentences convey the meaning of the previously unknown object language. The other is to assume that the metalanguage of apparently homophonic T sentences is some kind of pure propositional content, which we express in English because we are unable to communicate it directly.[2] Thus possessing such T sentences would allow the person who knows them to go from sentences in a language to facts about the state of the world, and this might be entertained as a description of how meaning in a speaker’s primary language works. Neither interpretation would invalidate the remainder of Larson and Segal’s paper, so long as we remember that homophonic sentences are simply stand-ins for the real T theories.
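To make the translational reading vivid, here is a toy sketch of my own (not anything from Larson and Segal, and nothing like a real compositional T theory): a “T theory” reduced to a bare translation table. When the object language and the metalanguage differ the resulting T sentence carries real information; when the table is just the identity mapping the T sentence collapses into something as empty as a = a.

    # Toy illustration (my own, not from Larson and Segal): a "T theory" reduced
    # to a bare translation table from object-language sentences to metalanguage
    # sentences.  Real T theories are compositional; this sketch only shows why
    # the homophonic case looks trivial.

    def t_sentence(object_sentence: str, translation: dict[str, str]) -> str:
        """Yield a T sentence of the form "'X' is true if and only if Y"."""
        metalanguage_sentence = translation.get(object_sentence, object_sentence)
        return f"'{object_sentence}' is true if and only if {metalanguage_sentence}"

    # Non-homophonic case: object language German, metalanguage English.
    german_to_english = {"Schnee ist weiss": "snow is white"}
    print(t_sentence("Schnee ist weiss", german_to_english))
    # -> 'Schnee ist weiss' is true if and only if snow is white  (informative)

    # Homophonic case: the "translation" is just the identity mapping.
    print(t_sentence("snow is white", {}))
    # -> 'snow is white' is true if and only if snow is white  (as empty as a = a)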

Notes:

1. Specifically they claim “But it certainly does not follow from this that what is said by such a T sentence [a homophonic one] is not highly substantive.”

2. It sounds easy, but there may be a hidden problem with deciding that the metalanguage is pure propositional content, a representation of some state of affairs in the world. If it were the case that the metalanguage was pure propositional content then “is true” would have to be something that can be part of a state of affairs. However there is some reason to believe that the notion of truth is a linguistic construction, which is not to say that truth doesn’t exist, only that it is something that can be predicated of language, and not part of the pure propositional content proper. We could adapt our theory to say that in the sentence “‘Q’ is true if and only if P”, P alone is pure propositional content, and that the metalanguage is some hybrid of pure propositional content and something else. A bit ad hoc though.

October 29, 2006

Definitions II

Filed under: Language — Peter @ 1:14 am

It’s been a bit hectic around here, so instead of a “real” post I have created a concise list of definitions for terms used in the philosophy of language. I suspect most of my readers already know what they mean, but simply creating such a list means that I don’t have to define them every time I use them in the future; I can simply link here without sacrificing clarity. So without further ado:

Intension
The intension of a word, roughly, can be thought of as what is intended by the use of that word. Obviously then intension is closely connected with sense (see below); one might even say that they are the same (I am so inclined), but that is not necessarily the case. At the very least words with the same sense have the same intension. Some, including myself, think that the intension of a word can be defined as the set of all objects that the speaker thinks that the word could reference, in all possible worlds. For example, for the word “cat” the intension would be the set of all cats in all possible worlds.

Extension
The extension of a word is commonly accepted to be the set of all objects (in this world) that the word references. For example, the extension of “cat” is the set of all cats. The classic example to demonstrate the difference between extension and intension is creature-with-a-heart and creature-with-a-kidney, which have the same extension, but not the same intension.
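A toy way to picture the heart/kidney example (a sketch of my own, using the possible-worlds reading of intension, not a standard formalism): compute the extension from the actual world alone, and the intension from all the possible worlds together.

    # Toy sketch (my own) of the extension/intension distinction.  Each "possible
    # world" is just a mapping from creature names to the organs they have.

    actual_world = {
        "dog": {"heart", "kidney"},
        "cat": {"heart", "kidney"},
    }
    # A merely possible world containing a creature with a heart but no kidney.
    possible_world = {
        "dog": {"heart", "kidney"},
        "strange-beast": {"heart"},
    }

    def creatures_with(organ: str, world: dict[str, set[str]]) -> set[str]:
        """All creatures in a world that have the given organ."""
        return {name for name, organs in world.items() if organ in organs}

    # Extension: computed from the actual world only -- the two predicates coincide.
    assert creatures_with("heart", actual_world) == creatures_with("kidney", actual_world)

    # Intension (possible-worlds reading): computed across all worlds -- they differ.
    worlds = [actual_world, possible_world]
    assert [creatures_with("heart", w) for w in worlds] != [creatures_with("kidney", w) for w in worlds]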

Sense
The sense is what determines the reference of a word (according to Frege). Of course in this context reference could be interpreted as the set that we identify with the extension, or the set we identify with the intension, or the reference as defined below. What the sense is, exactly, is a matter of some debate. Some think that it is a mental process, which would imply that we could collapse sense and intension together (or at least the sense and the mental acts that give a word its intension). Others, like Frege, think that the sense is itself some kind of abstract object that we enter into a cognitive relation with when we use language.

Reference
I define reference in a way that is a bit non-standard. Some authors use reference to mean essentially the set that defines the extension, or possibly the set that defines the intension. Either way the reference is objects in the world. Since we already have “extension” and “intension” I define reference slightly differently, as the object(s) in the world meant by the use of a word in some context. This won’t always be the same as the extension, although it will definitely be some subset of the extension. For example, if I use the word “lake” the reference is the lakes that exist on Earth, which are in some way connected to my use of that word. However there may be lakes on other worlds, lakes that fall under the extension of “lake” as I use it, but which aren’t the reference of my use of “lake”. Of course I don’t advocate making distinctions for their own sake, but this particular distinction is extremely useful in describing where some arguments for externalism go wrong (see here).

T theory / T sentences
A T theory is a set of rules that take sentences in some language and transform them into sentences such as “‘X’ is true if and only if Y”, which are called T sentences. Some, notably Davidson, claim that a T theory for a language captures the meaning of that language, and thus that T theories are theories of meaning.

Object Language / Metalanguage
In a T sentence such as “‘X’ is true if and only if Y”, X is a sentence in the object language, while the whole statement is a sentence in the metalanguage. When T theories were originally developed by Tarski the object language was a formal mathematical language, but obviously this is not the case when they are applied to natural languages. The metalanguage, when dealing with natural languages, may be a different natural language, or it may be “pure propositional content”, that is, claims about the way the world is. Generally it is accepted that if the metalanguage and the object language are the same then the T sentence is uninformative.

Meaning / Content
I won’t even attempt to define meaning here (not that I haven’t put forward theories about it elsewhere). Currently there are basically two ways of thinking about meaning. One is to identify it in some way with the intension or sense, and the other is to identify it with the reference or extension (for example, T theories seem to be identifying it with intension). Identifying meaning or content with the reference or extension seems tempting to many (especially externalists), but it does have some problems in dealing with the difference between statements such as a = a and a = b. If a and b have the same meaning, and the meaning is the reference or extension, then both statements are saying the same thing. They certainly don’t seem to be, though, and thus the philosopher who wishes to identify meaning and content with the reference or extension has some work to do.

October 28, 2006

Concepts and Knowledge

Filed under: Mind — Peter @ 12:00 am

Today’s post contains an example that I find amusing, but which some may be offended by. But, if you are that easily offended why do you even have an internet connection?

Here I use the word concept to mean a category that groups and labels our perceptions, the mechanism for which is unconscious. For example, possessing the concept of “tree” allows one to see trees as trees, instead of simply as some specific color patch. This is not quite the standard use of concept, which tends to include facts about the subject in what the concept covers, and without keeping this distinction in mind what follows may not make complete sense. With that out of the way let me ask my question: does possessing a concept imply that we know something about the object or objects the concept is about that is unavailable to someone without the concept?

Imagine then someone who knows everything there is to know about homosexual people. However this person is unable to recognize homosexual people when they encounter them; they don’t have the “homosexual” concept (remember, as defined above). Let us say then that to remedy this deficiency they invent a “gaydar” (we have to assume that in this world there is some physiological characteristic that such a machine could detect, like a unique protein, which is not the case in real life). Possessing such a machine would give that person the appropriate concept “artificially”. But since they built the machine using their pre-existing knowledge about homosexual people the machine isn’t telling them anything new about homosexuality (although it does inform them about who is homosexual, but that knowledge is not required to understand homosexuality itself). We can also imagine this device somehow being incorporated into their mind, giving them the “homosexual” concept. But simply integrating the machine into their brain doesn’t add to their knowledge about homosexuality, and so we conclude that developing the concept of “homosexual” doesn’t inform the person possessing it about homosexuality. We could imagine implanting the “gaydar” into someone who knew nothing about homosexuality, and even after its implantation they would still be ignorant, and would have no idea what this new concept was picking out.

Concepts then are more like mental abilities than knowledge, a distinction which admittedly can be confusing at times. And this is not to say that a concept can’t result in us possessing some new knowledge; the important point is what that knowledge is about. Having a concept like “tree” allows us to know which objects in the world are trees; it doesn’t inform us about what trees are. Of course, as with any normal concept, our knowledge about trees is easily confused with our tree concept, since when we see something as a tree, using our concept, we are also able to say why it is a tree, using our knowledge about trees. But this knowledge is not part of the concept or derived from it, although it likely had a role in the formation of that concept.

So why do we care? Well, as pointed out last time, what concepts tell us about is essential to unraveling the “problem” of Mary the color scientist. By hypothesis Mary knows all the physical facts about color perception there are to know, but still, when she departs from her colorless room she learns more about qualia, supposedly showing that there is more to qualia than just the physical description. But does she really learn something more about the qualia? From within her room Mary could already have built a color-detector. She knows what frequencies of light are which colors. She also knows how her brain will react to each frequency. Thus she can build a device that monitors her brain state and tells her what color she is seeing. This is basically the same thing our “gaydar” builder did above, and we agreed that having such a device didn’t increase his knowledge about homosexuality. So, likewise, Mary could have implanted her color detector into her brain and known upon first leaving the room which colors were which. And thus we conclude that Mary didn’t really learn anything about color experience upon leaving the room; she just developed a new concept. She learned to tell which objects were red, which were green, etc., but she didn’t learn anything new about the experience of seeing color itself.
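To make the color-detector idea concrete, here is a crude stand-in of my own: a function that labels wavelengths of light with rough color names (the band boundaries are only approximate). Mary’s actual device would classify her brain states rather than the light itself, but the moral is the same: the device outputs a label, not a new fact about what color experience is.

    # Crude stand-in (my own illustration) for Mary's color detector: it labels
    # a wavelength of light with a rough color name.  Mary's device would
    # classify her brain states instead, but either way the output is a
    # classification, not a new fact about color experience.

    def color_label(wavelength_nm: float) -> str:
        """Map a wavelength of visible light (in nanometres) to an approximate color name."""
        bands = [
            (380, 450, "violet"),
            (450, 495, "blue"),
            (495, 570, "green"),
            (570, 590, "yellow"),
            (590, 620, "orange"),
            (620, 750, "red"),
        ]
        for low, high, name in bands:
            if low <= wavelength_nm < high:
                return name
        return "outside the visible spectrum"

    print(color_label(650))  # "red" -- a label, not an experience of red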

Of course some will object, saying that I have missed the point, that Mary learns what colors “feel” like to her upon leaving the room. But my contention is that the “feel” of a color is not a fact about qualia/color perception but a concept. Certainly there is no reason to flatly deny that the “feel” of a color is a concept, and there are independent reasons to believe it (besides the fact that our theory about qualia requires it). For one, other concepts present themselves to us as similar “feelings”. When you look at a tree it simply “feels” like a tree; you don’t have to deduce it from other experiences. Of course, in contrast to colors, we can analyze our judgment that a certain perception is a perception of a tree in terms of color and shape perceptions, while those color and shape “feels” seem simple, even when we reflect on them. It may simply be that they are the most fundamental visual concepts, on which other concepts are based; at least that is how I see it. In any case being primitive certainly doesn’t disqualify them from being concepts. A person with a “gaydar” in their brain won’t be able to explain their “feeling” that someone is homosexual; it will feel simple and unjustified, just like our color feels, but we know that it is really just a concept. Another reason to believe that color “feels” are concepts is that, like concepts, they are “universals”. A particular color “feel” is something that can be experienced of different objects at different times, which certainly suggests that it is one concept that is deployed to identify a certain class of visual inputs.

So, to reiterate my conclusion from last time, what Mary learns upon leaving her room is how to use a new concept. This concept might inform her about the color of objects, but it doesn’t give her more information about color experience itself. And thus there is no reason to believe that Mary can’t know everything there is to know about color experience from within her room. And finally from this we can conclude that there is no problem with defining qualia / experience in terms of physical properties, or at least no problems that arise because of the thought experiment with Mary.

October 27, 2006

Materialism And Qualia

Filed under: Mind — Peter @ 12:55 am

Materialism has made a great deal of progress in explaining the mind in terms of a completely physical process happening in the brain. The first person perspective, intending, and representing were once all considered things that were unexplainable by materialism, but over time philosophers have shown how these aspects of the mind can be explained by purely physical events (although obviously complete descriptions have to be left to neuroscientists). However, there still remains one major explanatory hurdle for materialism: qualia. I am not one of the philosophers who think that qualia prove that the mind can’t be explained in materialist terms; there are many good reasons to believe that the mind must have such an explanation. But I wouldn’t deny that qualia, and the explanatory gap, are a problem that materialism needs to address.

I don’t think qualia are an insurmountable problem, as some do, simply because it seems like an open question whether a particular qualia really is some brain process (by open question I mean that it seems rational to ask how or why it is that brain process). All that open questions show is that we are ignorant about some of the necessary details, making the truth of the identity seem obscure. For example, our ancestors may have thought that the identification of water with H2O was an open question (is water “really” H2O?), especially since water seems to have properties, such as infinite divisibility, that H2O does not. But eventually such skepticism about identity claims tends to disappear, especially when it can be shown that H2O really can play the role of “water” (it has all the same objective properties, although obviously not the same subjective properties, since at the very least they have different names).

Thus the fact that identifying qualia with some brain process feels like an open question really reveals that we simply haven’t shown that some brain process can play the same role as qualia, not that they are necessarily distinct.* And trying to remedy this problem illuminates some of the real problems that make qualia so hard to address under materialism. Simply attempting to define what a qualia is can be a serious problem. We can roughly describe a qualia as experience as given to us. For example “redness” is a kind of qualia. But this doesn’t actually pin down what qualia are, at least not enough to be able to say whether something definitely is a qualia or not; it is simply suggestive. Imagine attempting to define “redness” for someone who has never seen red (without simply avoiding the question by saying “a kind of qualia” or “a kind of sensation”). The only way to convey what “redness” is seems to be by pointing out things that evoke the sensation of “redness” (the color of roses, etc.). Or if we are more poetical we might attempt to describe how “redness” makes us feel in terms of associated sensations (it feels “angry” or “dangerous”).

But, if that is the role that the physical process must play, then there seems to be no problem in capturing it in materialist terms. What a specific qualia, like redness, could be, physically, is a combination of two things. One is a specific kind of visual input. We can conceive of the visual input, which is integrated into experience as visual sensation, as an array of numbers (as what is happening physically, not as how it is presented to us; it is presented to us as just a certain color, different from other colors). Redness then would be a distinct number. Additionally this number is combined with associations**, both with other objects that have evoked this particular visual input, and possibly with memories, feelings, etc. that the mind has linked to this input for some reason. Many of these associations are unconscious (or at least at the periphery of awareness), but we can assume that when focusing on the specific sensation they are more likely to be brought to our attention. We can assume that thinking about red, instead of experiencing it, is simply a process of imagining red, in which the usual visual input is now being generated by imagination instead of perception. This physical process can play the same role as “redness” as described above. Redness, being some particular kind of visual input, is naturally the kind of thing you can’t explain in words directly, since it is part of your neural activity, and not something that can be described directly. Instead we must appeal to the associations, which are generally publicly accessible objects, using labels such as “red”, which are really just shorthand for these associations.
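Here is a purely illustrative schematic of that two-part proposal (the numbers and strings are placeholders of my own, not claims about how brains actually encode anything): a qualia as a distinct input code together with a bundle of associations, and “thinking about red” as the same code generated by imagination rather than perception.

    # Purely illustrative schematic of the two-part proposal above: a qualia as
    # a distinct visual-input code plus the associations linked to it.  The
    # values are placeholders, not claims about actual neural encoding.

    from dataclasses import dataclass, field

    @dataclass
    class Qualia:
        input_code: int                                        # stands in for the distinct visual input
        associations: set[str] = field(default_factory=set)    # objects, memories, feelings linked to it
        generated_by: str = "perception"                       # or "imagination", when merely thinking of it

    redness = Qualia(
        input_code=7,                                          # arbitrary placeholder value
        associations={"roses", "stop signs", "feels 'dangerous'"},
    )

    # Thinking about red, on this picture: the same input code, produced by
    # imagination rather than perception, with the same associations available.
    imagined_red = Qualia(redness.input_code, redness.associations, generated_by="imagination")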

Given this definition can we address the problem of Mary the color scientist? (See here for my description of the problem.) I think we can, because part of what defines an individual qualia is a specific kind of sensory input. Even if Mary learns what specific neurons and firing patterns are involved in this qualia she doesn’t possess the concept of red as we do, because when we think of red we generate red in our imaginations (simulating the experience of seeing red). But even learning what “redness” is doesn’t enable Mary to make the right patterns of neurons fire (the neurons that are the “red” visual input); that is just not an ability we have, to go from a description of the way the mind works to making our minds work in that way. So Mary “learns” something new about red when she first sees red, because now she is able to think about it in the same way we are, by imagining it. But she hasn’t learned anything new about the qualia of redness; she has simply acquired a new way of thinking about it.

So does the account given here really explain qualia? I am willing to entertain the possibility that there is something more that qualia do for us, some additional role they play in the mind that is not captured by the description that I have given here. The problem is identifying what that role is, so that we can determine what physical descriptions could play that same role (or if they could play the same role). And to be honest with you I simply can’t see what qualia are supposed to be beyond the description that I have given here. This is not to say that I am unaware of other descriptions of qualia, just that they are so vague that they don’t really inform us as to what properties something must have to be a qualia, and thus of little help in determining whether some physical process really is a qualia.

* Defining what role a specific concept plays is often essential to determining what it is exactly. For example, meaning, knowledge, etc.

** I have discussed in other places how our experience is saturated with concepts: when we see a tree we don’t see simply a visual impression, we see a visual impression as a tree, and this “seeing as” is part of the experience, not something we add to it by reflecting or thinking about it. These associations then should simply be treated as another example of this conceptual saturation.

