On Philosophy

December 29, 2006

Natural Interpretations

Filed under: Mind — Peter @ 4:12 pm

Functionalism holds that certain kinds of processes are conscious, and that all there is to consciousness is that process (meaning that the conscious experience of a particular individual at a particular moment can be identified in some sense with the physical realization of that process). Some people deny functionalism without giving it serious consideration because they are strongly committed to either free will (of the kind that isn’t compatible with determinism) or the idea of a soul, possibilities that functionalism rules out. I don’t know if it is possible to convince people to give up those kinds of beliefs, and so arguing for functionalism (or even materialism) to them might be impossible. However, these are not the only things that motivate people to reject functionalism. Some may simply want more details, a fully developed theory about what kinds of processes are conscious, before they feel comfortable endorsing the idea. This is the job for more detailed theories, and not my concern here. Another reason for some to reject functionalism is that it holds that anything which can support a complex process, such as computers, could potentially be conscious, and some simply don’t see how data, a sequence of binary digits, could be conscious as we are.

Of course, when we look at the human brain we may be equally skeptical. How can a bunch of neurons be conscious? When we look closely at both the human brain and the theoretical conscious computer, neither one seems made of parts that simply must be conscious. Certainly this isn’t a fault with functionalism, which says that consciousness only comes about when all the parts are put together in the right sort of whole, but how are we to assure ourselves that this is possible? Well, how do I know that I am conscious? Because I experience myself as an experiencer. By this I mean that I, as I live, experience not only the world but also myself as the subject. When I walk down the street I am aware not only of the street but, if I choose to focus on it, of the fact that I am experiencing the street. I think we can extend this to cover the general case: it is acceptable to say that if some system experiences itself as an experiencer then it is conscious.

Although better than nothing, this criterion for determining whether a system is conscious has its problems. For one, it might seem circular: surely only conscious subjects can have experiences, and so the criterion is no good to us in looking for consciousness, because in order to agree that some process is experiencing itself as an experiencer we must already have agreed that it is conscious. We might then modify our criterion by substituting a less loaded word for “experiences”, such as “thinks” or “interprets”, but these aren’t substantial improvements; it is hard to say when something is thinking or interpreting, and some might argue that these too can only be done by conscious subjects.

Let us then approach the problem from a different direction, by considering a simple device with a sensor and a light. When the sensor is over a dark color the light remains off, and when it is over a light color the light turns on. In a very loose sense, then, the light indicates that the device “thinks” (for lack of a better word) that it is over a light color. Now some might argue that the light itself has no meaning, and that the meaning is in us: we see the light as meaning something and project that knowledge onto the device. But if that were the case, consider our light/dark detector alongside three other devices: one that lights up randomly, one that lights up for the color blue, and one that is always lit. If we believed that the light itself had no meaning, we would have to consider the light being on in each of these devices as essentially the same. But clearly this isn’t the case. The light in each of these devices has what I call a natural interpretation. In two of the devices the light is meaningless, but in one it indicates light colors, and in one it indicates the color blue. I say that this is a natural interpretation because there are facts about the construction of the devices that determine that the lights will come on under certain circumstances, and these facts are independent of us. We might be tempted to pass this off as mere correlation, specifically that the light is correlated with certain conditions, but that doesn’t quite capture the idea. For example, one of our devices could be malfunctioning, say a wire has come loose inside it. It is still the case that there are facts about the device, specifically its normal condition, that present the light as indicating something, and thus even in a malfunctioning device the light may still indicate a condition even when that condition isn’t present. This is why it seems natural to use the word “thinks” to describe the device: the malfunctioning device may “think” that there is a light color, but it is “mistaken”. So we can state our criterion for consciousness: if a system “thinks” of itself as an experiencer then it is conscious, using the definition of “thinks” developed here.
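To make the idea of a natural interpretation a bit more concrete, here is a rough sketch in Python of the detector and its companions. The class names, the 0.5 brightness threshold, and the way the malfunction is modeled are all my own inventions for illustration; the point is only that the rule built into each device, not our attitude toward it, fixes what its light indicates.

import random

class LightColorDetector:
    """Light turns on when the sensed surface is light-colored."""
    def __init__(self, malfunctioning=False):
        self.malfunctioning = malfunctioning
    def light(self, brightness):
        if self.malfunctioning:
            return True              # loose wire: the light is stuck on
        return brightness > 0.5      # normal condition: indicates a light color

class BlueDetector:
    """Light turns on only for the color blue."""
    def light(self, color):
        return color == "blue"

class RandomDetector:
    """Light comes on at random; there is nothing for it to indicate."""
    def light(self, _):
        return random.random() < 0.5

# Probing the first device with known inputs recovers its natural
# interpretation, and that interpretation survives the malfunction: the
# broken device still "thinks" it is over a light color, but is "mistaken".
normal, broken = LightColorDetector(), LightColorDetector(malfunctioning=True)
print([normal.light(b) for b in (0.1, 0.9)])   # [False, True]
print([broken.light(b) for b in (0.1, 0.9)])   # [True, True]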

Now let us turn to the human brain, something that we can be confident is conscious. Can the human brain meet the criterion we have developed? First we need to develop an idea of how brain states can be naturally interpreted as being about anything, since the brain is not a simple system. Let’s consider a simple mental concept, such as “dog”. When the concept “dog” is part of my consciousness I assume there is some characteristic pattern of activation or signal, but how, as outside observers, are we to know that this pattern corresponds to “dog”? Well, there are certain facts about the operation of the brain that allow us to know that a specific pattern of activation is most likely to trigger certain other characteristic activities. In the case of “dog” these would probably be mental images of dogs, the word “dog”, facts about dogs, etc. There are also facts about the brain that tell us what kinds of activity are likely to result from certain sights and sounds. Thus we can connect the two and determine whether any of the characteristic activities likely to be triggered by the pattern under investigation correspond to some kind of sight or sound. And if they don’t, we can see what they in turn are likely to trigger and determine what sights and sounds those correspond to. If the pattern we are investigating really is the “dog” concept then the associated sights and sounds will likely be associated with dogs, and the associated concepts will also likely be connected to dogs in some way. Thus, looking at a certain pattern, we could say that there is a natural interpretation of that pattern as being the concept “dog”.
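As a rough sketch of that interpretive procedure, suppose (purely hypothetically) that we could tabulate which activities a given activation pattern tends to trigger and which sights and sounds tend to produce each activity. The tables and the name “pattern_42” below are invented; they only illustrate how tracing associations back to sights and sounds could yield an interpretation of the pattern as the concept “dog”.

# Hypothetical facts about the system: what each activation pattern triggers.
triggers = {
    "pattern_42": ["image_of_dog", "word_dog", "fact_dogs_bark"],
}

# Hypothetical facts about the system: which sights and sounds typically
# produce each of those activities.
typical_causes = {
    "image_of_dog": ["sight_of_a_dog"],
    "word_dog": ["sound_of_the_word_dog", "sight_of_the_written_word_dog"],
    "fact_dogs_bark": [],   # not tied directly to a sight or sound
}

def natural_interpretation(pattern):
    """Collect the sights and sounds associated with what the pattern triggers."""
    sights_and_sounds = []
    for activity in triggers.get(pattern, []):
        sights_and_sounds.extend(typical_causes.get(activity, []))
    return sights_and_sounds

# If the associated sights and sounds cluster around dogs, the pattern can
# naturally be interpreted as the concept "dog".
print(natural_interpretation("pattern_42"))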

Obviously it is harder to determine a natural interpretation for a concept like “self”, but I think it can be done. You might start by determining whether there is a common concept triggered when the system happens to look at itself (because in people this triggers the idea that I am looking at “my” hand). From there you would look for an associated concept that is likely to bring up information about the current state of the system (for example, when you think about yourself, the fact that you are hurt, or tired, or hungry often springs to your attention). If we can find something like this it would certainly be natural to interpret it as the concept of self. To determine whether this self is “thought” of by the system as an experiencer takes an extra step. I would propose seeing whether this self concept can trigger any memories, which would seem to indicate that the system thinks of itself as having experienced those events. The best part about this process, however, is that if it were carried out on the human brain it seems likely that it would conclude that the brain is conscious, as it should. Now I can’t say that it would definitely succeed, because we don’t know enough about how the brain works; but given what we do know, and our own experience of being conscious, it seems likely. And we could carry this process out on a computer as well, given that we have access to both the states the computer is in and the instructions that govern how it moves from one state to another. Thus a computer, constructed properly, could very well be conscious, or at least we could be as sure of the computer being conscious as we are that other people are conscious.
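A toy version of that test might look like the following sketch, which simply assumes we already have the relevant facts about the system written down. The dictionary, the concept name “concept_S”, and the “current_”/“memory_” labels are hypothetical stand-ins for information we would have to extract from a real brain or computer.

# Hypothetical facts about the system under investigation.
system = {
    # Concept triggered when the system's sensors pick up the system itself.
    "triggered_by_self_perception": "concept_S",
    # What each concept tends to bring up when activated.
    "associations": {
        "concept_S": ["current_hunger_level", "current_fatigue_level",
                      "memory_walked_down_street", "memory_saw_a_dog"],
    },
}

def thinks_of_itself_as_experiencer(system):
    self_concept = system["triggered_by_self_perception"]
    assoc = system["associations"].get(self_concept, [])
    # Step 1: the candidate self concept brings up the system's current state.
    tracks_current_state = any(a.startswith("current_") for a in assoc)
    # Step 2: it can also trigger memories, i.e. events the system represents
    # itself as having experienced.
    triggers_memories = any(a.startswith("memory_") for a in assoc)
    return tracks_current_state and triggers_memories

print(thinks_of_itself_as_experiencer(system))   # True for this toy system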
