On Philosophy

October 28, 2006

Concepts and Knowledge

Filed under: Mind — Peter @ 12:00 am

Today’s post contains an example that I find amusing, but by which some may be offended. But if you are that easily offended, why do you even have an internet connection?

Here I use the word concept to mean a category that groups and labels our perceptions, the mechanism for which is unconscious. For example, possessing the concept of “tree” allows one to see trees as trees, instead of simply as some specific color patch. This is not quite the standard use of concept, which tends to include facts about the subject in what the concept covers, and without keeping this distinction in mind what follows may not make complete sense. With that out of the way, let me ask my question: does possessing a concept imply that we know something about the object or objects the concept is about that is unavailable to someone without the concept?

Imagine, then, someone who knows everything there is to know about homosexual people. However, this person is unable to recognize homosexual people when they encounter them; they don’t have the “homosexual” concept (remember, as defined above). Let us say that to remedy this deficiency they invent a “gaydar” (we have to assume that in this world there is some physiological characteristic that such a machine could detect, like a unique protein, which is not the case in real life). Possessing such a machine would give that person the appropriate concept “artificially”. But since they built the machine using their pre-existing knowledge about homosexual people, the machine isn’t telling them anything new about homosexuality (it does inform them about who is homosexual, but that knowledge is not required to understand homosexuality itself). We can also imagine this device somehow being incorporated into their mind, giving them the “homosexual” concept. But simply integrating the machine into their brain doesn’t add to their knowledge about homosexuality, and so we conclude that developing the concept of “homosexual” doesn’t inform the person possessing it about homosexuality. Conversely, we could imagine implanting the “gaydar” into someone who knew nothing about homosexuality; even after its implantation they would still be ignorant, and would have no idea what this new concept was picking out.
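
To make the ability/knowledge split concrete, here is a minimal sketch of such a detector in Python. It is my illustration, not part of the thought experiment itself: the marker name and threshold are invented, and the point is simply that the device encodes a recognition rule while containing no facts about the category it picks out.

```python
# A minimal sketch of the hypothetical "gaydar" device. The marker and
# threshold are pure inventions for illustration; the post stipulates only
# that *some* detectable physiological characteristic exists in this
# imagined world (it does not in the real one).

def gaydar(blood_sample: dict) -> bool:
    """Return True if the hypothetical marker protein is detected.

    Note what is absent: the function contains no knowledge about
    homosexuality itself. It encodes a recognition rule, nothing more.
    """
    return blood_sample.get("marker_protein_x", 0.0) > 0.5

# Someone who knew nothing about the category could still run this
# detector; they would gain classifications, not understanding.
print(gaydar({"marker_protein_x": 0.9}))  # True
print(gaydar({"marker_protein_x": 0.1}))  # False
```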

Concepts, then, are more like mental abilities than knowledge, a distinction which admittedly can be confusing at times. This is not to say that a concept can’t result in us possessing some new knowledge; the important point is what that knowledge is about. Having a concept like “tree” allows us to know which objects in the world are trees; it doesn’t inform us about what trees are. Of course, like any normal concept, our knowledge about trees is easily confused with our tree concept, since when we see something as a tree, using our concept, we are also able to say why it is a tree, using our knowledge about trees. But this knowledge is not part of the concept or derived from it, although it likely had a role in the formation of that concept.

So why do we care? Well, as pointed out last time, what concepts tell us about is essential to unraveling the “problem” of Mary the color scientist. By hypothesis Mary knows all the physical facts about color perception there are to know, but still, when she departs from her colorless room, she learns more about qualia, supposedly showing that there is more to qualia than just the physical description. But does she really learn something more about the qualia? From within her room Mary could already have built a color detector. She knows which frequencies of light correspond to which colors. She also knows how her brain will react to each frequency. Thus she can build a device that monitors her brain state and tells her what color she is seeing. This is basically the same thing our “gaydar” builder did above, and we agreed that having such a device didn’t increase their knowledge about homosexuality. So, likewise, Mary could have implanted her color detector into her brain and known upon first leaving the room which colors were which. And thus we conclude that Mary didn’t really learn anything about color experience upon leaving the room; she just developed a new concept. She learned to tell which objects were red, which were green, etc., but she didn’t learn anything new about the experience of seeing color itself.
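
As a concrete illustration, here is a sketch of the sort of classifier Mary could assemble from inside the room. The sketch is mine: the wavelength bands are rough textbook figures, and measured wavelength stands in for whatever brain-state reading her device actually uses. Nothing in it goes beyond the physical facts she already possesses.

```python
# A sketch of Mary's color detector, built entirely from facts available
# inside the room. The wavelength bands are approximate textbook values,
# used here only for illustration.

COLOR_BANDS_NM = [  # (lower bound, upper bound, label), in nanometers
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def classify_wavelength(nm: float) -> str:
    """Map a measured wavelength to a color label.

    Everything here is knowledge Mary already has in her room; running
    the classifier adds facts about objects (which things are red),
    not facts about color experience itself.
    """
    for lower, upper, label in COLOR_BANDS_NM:
        if lower <= nm < upper:
            return label
    return "outside the visible spectrum"

print(classify_wavelength(700))  # red
print(classify_wavelength(520))  # green
```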

Of course some will object, saying that I have missed the point, that Mary learns what colors “feel” like to her upon leaving the room. But my contention is that the “feel” of a color is not a fact about qualia/color perception but a concept. Certainly there is no reason to flatly deny that the “feel” of a color is a concept, and there are independent reasons to believe it (besides the fact that our theory about qualia requires it). For one, other concepts present themselves to us as similar “feelings”. When you look at a tree it simply “feels” like a tree; you don’t have to deduce it from other experiences. Of course, in contrast to colors, we can analyze our judgment that a certain perception is a perception of a tree in terms of color and shape perceptions, whereas those color and shape “feels” seem simple even when we reflect on them. It may simply be that they are the most fundamental visual concepts, on which other concepts are based; at least that is how I see it. In any case, being primitive certainly doesn’t disqualify them from being concepts. A person with a “gaydar” in their brain won’t be able to explain their “feeling” that someone is homosexual; it will feel simple and unjustified, just like our color feels, but we know that it is really just a concept. Another reason to believe that color “feels” are concepts is that, like concepts, they are “universals”. A particular color “feel” is something that can be experienced of different objects at different times, which certainly suggests that it is one concept that is deployed to identify a certain class of visual inputs.

So, to reiterate my conclusion from last time, what Mary learns upon leaving her room is how to use a new concept. This concept might inform her about the color of objects, but it doesn’t give her more information about color experience itself. Thus there is no reason to believe that Mary can’t know everything there is to know about color experience from within her room. And from this we can conclude that there is no problem with defining qualia/experience in terms of physical properties, or at least no problem that arises because of the thought experiment with Mary.

16 Comments

  1. “She also knows how her brain will react to each frequency.”

    Again, she knows what parts of her brain are affected, but she does not know what being affected in that way “feels like.” These are two different things to have knowledge of!

    Comment by Carl — October 28, 2006 @ 4:07 am

  2. …Having actually read the end of the post now (heh), I’m willing to grant some of your terminology. Now, she knows how to use the concept of red, or has acquired knowledge of the concept red. However, I’m not sure how useful the distinction is. Red would still feel like something, whether she could identify it again later or not. That queasy feeling one gets in middle school during one’s first crush is not necessarily the same feeling as one has during other things we throw under the general concept of “love,” but it is still a distinct thing which can be felt, even if it is unclear to the experiencer what it is that’s being experienced.

    Comment by Carl — October 28, 2006 @ 4:13 am

  3. You are confusing the “feeling” of love with our ordinary concept of love. Our ordinary concept of love is much more complex than the kind of concepts I am describing here. The queasy feeling itself would be a concept, but it is not our concept of love. Love is not a primitive/simple concept, just as tree is not a primitive/simple concept.

    Comment by Peter — October 28, 2006 @ 11:53 am

  4. A number of points with which I still do not agree:

    1) I find your use of the word “concept” to be very peculiar. I don’t think that Mary had a concept of red which was anything like the concept which we have of red, primarily because at the heart of our concept of red is redness, something which she is not at all familiar with. When she is released, she is not simply learning how to use the same concept differently, but rather she is radically altering the concept itself. This is especially the case given your tendency toward pragmatism wherein there is no line between the content and use of a concept. An illustration of this would be the fact that almost nobody knows much of anything about the concept of redness, as you define it, and thus there is a complete disjunction between our normal concept of “red” and that of Mary.

    2) Knowing how to identify a color by way of a machine cannot be equated with knowing how to recognize a color for oneself. It is for this reason that when the machine does identify redness, Mary learns “Oh, so that is red!” for herself. In other words, I don’t see how her learning that “that” is red could ever be considered anything other than learning something new.

    3) While you have argued against a strong form of property dualism which I was clearly not advocating, you have not argued against the existence of private, yet physical properties. While Mary certainly could have learned all the publicly available physical information surrounding redness and our perception of it, I have still seen no reason to assume that she had access to any of the private physical information which characterizes qualia.

    Comment by Jeff G — October 29, 2006 @ 11:10 am

  5. 1) I had to call it something, and there are some parallels. I don’t see why you are bothered by it since I pointed that fact out quite specifically.

    2) Something new about the object, not about her experience. This is what the “gaydar” example shows. No materialist has a problem assuming that Mary might learn something new about which objects have which colors, assuming that she didn’t look up that information beforehand.

    3) Private by accident, not private as an essential feature. I see no reason why it couldn’t be public with the right technology. I don’t see why you think the fact that some properties happen to be private is a kind of property dualism. Some facts are currently private and some are public. Some are true and some are false; some objects are red and some are not-red. Just because we can draw a distinction doesn’t mean we are property dualists.

    Comment by Peter — October 29, 2006 @ 12:55 pm

  6. Here is what I see as an insuperable obstacle to your idea that the right technology can make qualia public:

    Can the right technology ever allow us to experience what it is like to be a bat using echo-location? Could we ever see what it is like for a bee to see the “color” of ultraviolet light? Could we ever experience what it is like to navigate by way of electric fields as many deep sea fish do? I see no way in which technology could ever give us these experiences or qualia.

    If qualia really are potentially public, then it shouldn’t matter that we are human should it?

    Comment by Jeff G — October 30, 2006 @ 6:54 pm

  7. Implanted memories. So yes.

    Comment by Peter — October 30, 2006 @ 11:12 pm

  8. Wow. I have to confess that you have some serious faith in the potentials of technology. While I’m not at present prepared to defend my assertion, I simply do not think that humans have the necessary neural equipment to experience such qualia.

    Comment by Jeff G — October 31, 2006 @ 12:44 am

  9. Maybe not at the moment, but there seems to be no fundamental obstacle to putting one’s consciousness into a computer, which could then be reconfigured as you wish. Obviously I reason that we can do such things because I am a materialist, but such a position is a consequence of my approach, since we don’t have evidence either way. To simply assume it can or can’t be done as the basis for an argument is to beg the question.

    Comment by Peter — October 31, 2006 @ 12:47 am

  10. FYI,

    Adam over at Aspiring Lemming has posted on the exact topic which we are discussing:

    http://desertlemming.blogspot.com/2006/09/holy-crap-im-dualist.html

    Comment by Jeff G — October 31, 2006 @ 12:49 pm

  11. Well, Jeff G’s brought me into this discussion, so I suppose I should contribute something to it. Let me start out by saying that my post is about a slightly different “consciousness” argument, so don’t expect me to settle this debate.

    I initially thought that what you were calling a concept was what psycholinguists (and linguists, i suppose) call a lemma. Don’t worry, it’s not part of a complex Logical sub-argument; lemmas are merely the non-verbal, pre-linguistic (in the sense that they exist above the lexicon in our language-processing architecture) entities from which words get their content. Locke might have called these things “ideas”, and someone might use the term ‘understanding’, but as I continued to read, it turns out that you’re not talking about lemmas.

    I take issue with a few things that your post suggests, particularly that concepts are more like abilities than knowledge, but since you admit that you’re using ‘concept’ in a (very!) unorthodox manner I won’t harp on it… other than to suggest that if you’re using a word in a way that pretty much no one else uses it, you might as well just make up a new word for your purpose. Goes a long way towards avoiding confusion and redundant argumentation. when i think of it, i’ll use concept* to denote your unique notion and usage.

    I’ll also gloss over the fact that homosexuality is a practice, not a natural kind nor a separate species of creature, so we shouldn’t expect our concept of homosexuality to help us pick out homosexuals any more than we should expect our concept of golf to help us pick out golfers.

    This all just seems to me to be another reason to deny that we should understand ‘concept’ as an ability to recognize, rather than a mental entity or family of associated ideas. I will grant that having a concept of ‘tree’ is NECESSARY for recognizing something as a tree, though it’s certainly not necessary for seeing a tree; nor is it SUFFICIENT for recognizing something as a tree, since recognizing a tree also requires properly functioning perceptual abilities inter alia. You’re right that it’s easy to confuse “having a concept” and “having the ability to apply a concept”, but i’m not sure that you’ve managed to help clear up that confusion.

    Your re-iterated conclusion is that “Mary learns upon leaving her room how to use a new concept. This concept might inform her about the color of objects, but it doesn’t give her more information about color experience itself.” But your notion of concept just is the ability to see things for what they are, so it would seem that you’re denying that Mary has the concept (by your use) of “red” when she’s in the room… at least UNTIL she somehow manages to build a device that notifies her of when she’s in the presence of and experiencing red. Is that right?

    Ignoring the feasibility of the technology, so long as Mary is relying upon the device to tell her when she is experiencing red, it doesn’t seem to me that she’s learned a new concept at all (in either the standard use or yours). That is, she’s not learned how to recognize when she is having a red-like experience; rather, she’s learned how to recognize that a device is recognizing that she is experiencing redness. It’s not until she’s able to recognize, independent of the device, that she’s having an experience of redness that she can be said to have learned how to apply her concept ‘red’ (which, i guess, is a whole new concept* for you).

    I think it’s important to note that such a device would not be monitoring Mary’s phenomenal qualia, only her brain states; and this is where the materialist still must justify his claim that the brain states JUST ARE the phenomenal qualia. To do otherwise is to beg the question at hand… I think (it’s getting late, and my halloween sugar-high has worn off).

    Quickly, in regard to whether or not technology will ever get to the point at which one could “upload” their consciousness into a computer, I’m skeptical. My skepticism is largely due to a biological chauvinism, which resembles the intuition that having mental states in some way requires “grey matter”. I don’t think silicon or some other digital medium can achieve the same kind of consciousness that humans experience, regardless of whether or not it can simulate the functions of neural components down to the microscopic level. A virtual tornado can’t plow through a real (i.e., “non-virtual”) trailer park, and I doubt one ever will; likewise, a virtual brain doesn’t seem to me to be made of the kind of stuff that’s necessary for processing perceptual inputs in the right way (i.e., in the way that humans process and experience them). I don’t have a great, or even a good argument for my skepticism, which is why I’m stuck calling it chauvinism.

    Comment by Adam — November 1, 2006 @ 12:10 am

  12. Concept was simply the first word that came to mind. In later discussion about the ideas presented here I briefly considered content, but decided that was also too loaded, then moved to recognitional ability, which was still off. At the moment I am calling them recognitional concepts, and identifying the “feel” with the exercise of a recognitional concept. As for homosexuality, I know it is a practice, although you can divide people up into groups by dispositions or activities if you so choose. Anyways, the concept would be homosexual person, not homosexuality. If you understand the concept as homosexuality you will be misled about my claims, a point that perhaps I didn’t make clear.

    “so it would seem that you’re denying that Mary has the concept (by your use) of “red” when she’s in the room”
    yes, exactly

    “so long as Mary is relying upon the device to tell her when she is experiencing red, it doesn’t seem to me that she’s learned a new concept at all”
    that is also a good reading of it, since the concept* as I use it here is a mental ability.

    But after this is where you miss my point. My point is that using the recognitional concept of redness doesn’t inform you about your experience. Mary has learned a new ability, but not a new fact about experience, or so I claim. And thus materialists don’t have to worry that they can’t explain qualia. All they need to explain is how recognitional concepts work; they don’t need to give the researcher that recognitional concept.

    The device thought experiment was simply to show that the knowledge that something is “red” or “green” is also available to Mary without having the recognitional concept. Most people don’t consider this part of qualia, but one of the objections that Jeff raised to the previous post was essentially that Mary couldn’t know which color was which. So I showed that she could build a device to identify them, thus demonstrating that there is no new knowledge to be gained there either.

    Oh and as for your belief that consciousness can only occur in a biological brain … well such a theory is, in my opinion, fatally flawed because it can’t hope to overcome the problem of other minds, since a properly designed computer could behave the same way AND have the same internal states (which is how non-behaviorists often resolve the problem of other minds). And anyways one can’t raise it as an objection, properly speaking, because we don’t have any evidence whether it can or can’t be done. So to lean on it, either way, would be to beg the question.

    Comment by Peter — November 1, 2006 @ 12:59 am

  13. So, it seems to me that your story can be minimally modified such that mary could have a recognitional concept of red even though she were blind. If we tell the story such that mary has a condition wherein the signals from her optic nerve are blocked before reaching the first stage in visual processing–call it V1, like the cogsci-ers like to do–(this, by the way, would be a great explanation of why Mary is so damned obsessed with color-perception theory!), and if we tweak your device such that it picks up the signals from the optic nerve prior to the blockage, it would seem that all your requirements for concept possession could be met without mary ever having seen a red object. Sure, she would have all the theoretical knowledge, and she would have a reliable means of determining when red light has struck her retina (since the device beeps “red” whenever this occurs), so we should attribute to mary the mental ability to pick out instances of red. We could even equip her with a laser pointer, configured so as to coordinate with the detection device of course, such that she’d be able to point to red objects.

    But… Mary’s still never experienced redness. She would still learn something new about “what it’s like to SEE red.” Knowledge that something is “red” or “green” was available to Mary before she left the room; she knew that roses were red and that grass was green, for instance. But the power of the Mary thought experiment is supposed to be that she comes to learn something new in the “So-this-is-what-red-looks-like” moment. personally, i don’t buy it, but for different reasons than those you put forth, peter.

    ——————————

    wait… so far as i can tell, my intuition that the kind of consciousness experienced by humans requires gray matter doesn’t rule out the possibility of other minds in humans, only the possibility of minds in computers. if you can make more clear to me why my chauvinism entails such a thing, i’d be grateful.

    your premise that “a properly designed computer could behave the same way AND have the same internal states” is the very issue in question, and the one I am arguing against, which means that simply assuming it as a counter-example to my argument is question-begging on your part. I think I’ve given a reason for reconsidering the possibility, and it seems that you have the onus of showing why my reason doesn’t hold water, and of doing so without already assuming that computers are capable of human-like consciousness. You have to prove to me that computers in fact have (or can have or will have) the same kind of internal states as humans, or at least that there is independent reason to believe that they do (or can or will), in order to undermine my conclusion.

    (an aside: not only isn’t it clear to me that the problem of other minds is an obvious consequence of my theory, it isn’t at all clear how it’s a problem unique to my theory; as far as i know, it’s (a) a separate problem, and (b) a problem for just about everyone…including eliminative materialists; as such, it doesn’t really carry much weight as an objection to my proposal/intuition).

    Also, for the record, I don’t see how my expressing doubt about the realizability of human consciousness via artificial materials is begging the question. My argument doesn’t utilize “AI will never achieve human-like consciousness” to support the conclusion that “Therefore, AI will never achieve human-like consciousness.” Rather, what I suggest is that, for certain processes and objects, the materials are essential (e.g., a foam cake without eggs and flour just ISN’T a foam cake); I happen to think that human consciousness necessarily requires the grey matter (and neurochemicals and proteins, etc.) in order to be realized. The consequence of this belief, along with the belief that computers aren’t made of grey matter (and neurochemicals…), is that computers aren’t sufficient to realize human consciousness. Now you might want to challenge my belief that certain objects can only be realized via certain materials, or you might want to challenge my belief that brains are one of those objects (hint: this is the weak spot). I’ll be the first to admit that my argument isn’t great, or even good (in fact, that’s exactly how i ended my original post), but I will take issue with the accusation that it’s circular.

    looking back over my post(s), it might appear that i’m a confrontational person, but really i’m not. hopefully i haven’t come across as too aggressive or anything. if so, i apologize. oh, and i’ll try to stop posting such long-winded replies.

    Comment by Adam — November 1, 2006 @ 10:11 pm

  14. “seem that all your requirements for concept possession could be met without mary ever having seen a red object”
    Nope. Recognitional concept possession is developed via consciousness, or at least that would be reasonable to suppose, although I didn’t really present a positive story as to how it works here. She would develop a different concept by watching the device, not the recognitional concept of “red” as we understand and use it. Mary can’t get our recognitional concept of red if she is color-blind to red or if she never sees red. But she can learn all there is to know about experience; that is my claim, that the “feel” is not new knowledge about experience.

    The problem with your biological leanings is that you don’t have a reason for them, just a strong intuition. The problem with other minds, in your case, is: how do I know that the person in front of me doesn’t have a computer for a brain? Saying that most people are biological is simply dodging the problem, which is that you don’t have positive requirements for what it takes to be conscious and how you can know; you just think that it can’t be something other than biological. What makes biology so special? If I simulate addition on a computer it is just as much addition as addition performed by the brain; why not consciousness too?

    You would be begging the question if you argued that we can’t get at the private states of others because the brain can’t be run on computer hardware. And anyways, running on other hardware was simply the easiest way. You could theoretically go fiddle with the biological brain to get the same effect, adding sensory apparatus as necessary with the right technology.

    Comment by Peter — November 1, 2006 @ 11:20 pm

  15. I still don’t see how being able to construct a device is sufficient for attributing to mary the knowledge of what it’s like to see red, nor am i clear on what constitutes a ‘recognitional concept of “red” as we understand and use it’. maybe we’re just talking past each other at this point.

    we’re almost certainly talking past each other on the supposed problem of other minds. so, maybe we should be clear about what you mean by the problem of other minds. as i understand it, the problem is that we don’t have any first-hand experience of another individual’s mental life, that we don’t have a rational basis for believing that someone has the same kind of consciousness as ourselves. can we agree on that? the problem stated as such is not the issue at the heart of property dualism. moreover, it’s a universal problem for humans.

    one way of trying to solve the problem, originally put forth by Mill (i think), is the argument from analogy, whereby the fact that the person in front of me is so very similar to myself, combined with the fact that I’m a conscious being, gives me a rational basis for concluding that they, too, have consciousness. One point about this: computers and silicon chips are disanalogous to brain matter, so we shouldn’t expect this argument to carry through for them. there’s one positive argument for you: grey matter and computer chips are materially disanalogous; attributing consciousness relies on an argument from analogy; thus, the argument does not apply to computer chips.

    and if you read carefully, my claim that human-type consciousness requires brain matter IS a positive requirement. the consequence, that machines don’t have consciousness like humans, is a result of that positive thesis. and i don’t pretend to tell you the epistemic method by which you can judge whether someone else is conscious, much less whether their brain has been supplanted by a computer. i’m merely claiming that it’s necessary that if someone DOES have human-level consciousness, they have a biological brain. the second that you show me someone with human consciousness but no brain, i’ll retract my argument and admit defeat (okay, i probably won’t admit defeat, but i’ll almost definitely revise my argument. ;) ).

    (btw, you’re not simulating addition on a computer; you’re inputting symbols into the machine, which performs a function of addition and spits out a symbol. but calculators don’t process addition in the same way that humans do; see, i have an understanding of the value of two, while calculators merely manipulate the symbols without comprehending ‘two’ in a meaningful way. and do you really think consciousness is even remotely similar to simple addition?!?!)

    I actually think that (a variation of) Chalmers’ zombie works really well as a new problem of other minds… but AGAINST your notion of AI. consider, science may very well create a chip that processes typical human inputs and produces the appropriate behavioral outputs such that it passes the Turing test; such a being would be but a computerized version of Chalmers’ zombie, in that since we have no direct access to their “mind”, we lack any rational basis for attributing consciousness to them. they might scream out when we prick their finger with a pin, but how do we know that they actually felt pain rather than merely performed a programmed function to respond with a scream?

    I’m really not sure what was going on in your last paragraph, so it’s difficult to respond. i didn’t bring the problem of other minds into the discussion, and i certainly didn’t use it to argue that the brain can’t be run on computer hardware. my argument against uploading consciousness is based on my property dualism, not difficulties with intersubjective access.

    Comment by Adam — November 3, 2006 @ 12:48 am

  16. “still don’t see how being able to construct a device is sufficient for attributing to mary the knowledge of what it’s like to see red”
    I didn’t say that; I said that such a device would allow her to know which objects are red. Knowing all the physical facts about how minds work in normal people allows her to know all there is to know about the experience of seeing red in normal people.

    “whereby the fact that the person in front of me is so very similar to myself”
    But you don’t know anything about the similarity of your brains from observing them. They might have a computer inside for all you know. All you know is that you behave similarly. Such solutions are put forth by behaviorists, who say that all that matters for consciousness is acting in a certain way.

    “my claim that human-type consciousness requires brain matter IS a positive requirement”
    but it is not a positive argument, supported by reasons, as to why human consciousness couldn’t exist in a computer. It is supported by thought experiments and intuitions. I was looking for a positive requirement in terms of why it had to be a C-based system and not an Si-based one. They both work via electrical charges, after all. And why some biological systems have consciousness and some don’t.

    “you’re not simulating addition on a computer”
    addition is an abstract mathematical process. One way of doing addition is through a computer. Another, less reliable way, is through a brain. But they are essentially the same in the sense that they are both doing addition. The proposal is that consciousness is an abstract process, and thus can occur in other places besides brains.
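
    (A quick sketch of this multiple-realizability point, in Python; the two implementations below are mine, for illustration only: the same abstract function, addition, realized by two different mechanisms.)

    ```python
    # Two different realizations of the same abstract process, addition:
    # the machine's native adder, and a deliberately different mechanism
    # built from repeated successor steps. On the view argued here, what
    # makes both of them *addition* is the function computed, not the
    # mechanism that computes it.

    def add_native(a: int, b: int) -> int:
        return a + b  # realized directly by the hardware adder

    def add_by_successor(a: int, b: int) -> int:
        """Realize addition as b applications of the successor step.

        Assumes b is a nonnegative integer.
        """
        result = a
        for _ in range(b):
            result += 1  # one successor step
        return result

    assert add_native(2, 3) == add_by_successor(2, 3) == 5
    ```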

    “since we have no direct access to their “mind”, we lack any rational basis for attributing consciousness to them”
    hence appeals to internal states: behavior-simulating chips don’t have the right internal states, but people do, and some computers might

    Comment by Peter — November 3, 2006 @ 1:42 am

