On Philosophy

June 12, 2006

Searle’s Chinese Room

Filed under: Mind — Peter @ 4:59 pm

John Searle is famous both for being a materialist and for arguing that computers cannot be conscious (he's famous for other reasons too, of course, but this is an interesting position for him to take). Most materialists think the exact opposite: because there is no nonphysical mental substance, consciousness should be able to occur in places other than human brains, and sophisticated computers seem like an excellent candidate. This makes Searle's work especially interesting; is he misunderstanding how computers work, or should we abandon AI research in light of his arguments?

Searle has several reasons for taking this position, but the one I am going to examine here is his famous Chinese room thought experiment (although you probably guessed that already from the title). The Chinese room consists of a man (who doesn't speak Chinese) and a book of instructions. Into the room are passed papers containing Chinese sentences. The man manipulates the Chinese characters according to the rules in the book, and out come appropriate responses in Chinese. We assume that these responses give every indication of consciousness (for example, they refer to internal states) and that the room could pass the Turing test. Even though the room's responses may seem intelligent, and meet the criteria most materialists give for having consciousness, it should seem obvious to us (according to Searle) that the Chinese room is not conscious.

Many philosophers have of course disagreed with Searle (Douglas Hofstadter, for example), claiming that the Chinese room is in fact conscious. I, however, think that the Chinese room does lack consciousness, but that this lack does not imply that computers in general cannot be conscious. I have argued before that intelligence can be separated from consciousness, and I think Searle's Chinese room does just that: it is intelligent but unconscious.

Our intuition that the Chinese room is not conscious comes from two closely connected factors. One is that it can never spontaneously generate output. At least as Searle presents it, the Chinese room simply responds, in a somewhat predictable way, to input. This is not the way consciousness works, however, at least in our experience; we are conscious constantly, not just after someone asks us a question. By setting up the thought experiment in this way Searle has almost guaranteed that a stream of consciousness in this system is impossible. It would not necessarily be impossible in all computer systems, though, some of which might run constantly and spontaneously converse "whenever they feel like it".

The second factor making it impossible for the Chinese room to be conscious is that it cannot be self-conscious. Once again the thought experiment is presented in such a way that the input seems to be processed in a linear fashion by the man operating the room: rules A are applied, then rules B, etc., until the results are generated. This is not the way consciousness works, however; consciousness contains feedback, such that one thought leads to another thought and then another, until we are rendered unconscious. In a system that proceeds only in a linear manner there is no possibility of consciousness, because consciousness requires thoughts, and thoughts are partly defined by our ability to reflect upon them.

So Searle is right: the Chinese room is not conscious. However, let me propose a modification of the Chinese room thought experiment that will make it seem intuitively conscious. We still have a man in a room manipulating Chinese symbols. Now, however, he is constantly at work; there are papers covered in symbols going everywhere, and sometimes papers generated by one set of rules are fed back into the system at an earlier point. Sometimes input (in Chinese characters) is added to the room, where it joins the stack of papers the man is working with, but it is just one paper among many. Occasionally the rules tell the man to eject one of the papers from the room; this is the output from the system. Sometimes it comes in response to a question, and sometimes it seems to occur at random. Now let us allow an actual Chinese speaker to examine the contents of the room. Reading some of the papers flying around, they might see phrases like "I'm bored", "Jokes are funny", and "I wonder if anyone knows a joke about elephants", and then a piece of paper might appear as output that says "Hey, does anyone know any good jokes about elephants?". The Chinese speaker would feel like they were reading the room's thoughts. Now I think our intuition would lead us to believe that the Chinese room is conscious, or at least my intuition does.
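The modified room described above can be sketched as a simple loop: a queue of "papers" processed continuously, with some results fed back in at an earlier point and some ejected as output. This is only a toy illustration of the feedback architecture; the rule chain and paper contents are hypothetical, not anything Searle proposes.

```python
from collections import deque

def run_room(rules, initial_papers, steps):
    """Process papers continuously; feed some back, eject some as output."""
    papers = deque(initial_papers)  # papers circulating in the room
    output = []                     # papers ejected from the room
    for _ in range(steps):
        if not papers:
            break
        paper = papers.popleft()
        action, result = rules(paper)
        if action == "feedback":    # fed back into the system at an earlier point
            papers.append(result)
        elif action == "eject":     # output, whether prompted or spontaneous
            output.append(result)
    return output

# Hypothetical rules: a chain of "thoughts" ending in a spontaneous question.
def rules(paper):
    chain = {
        "I'm bored": ("feedback", "Jokes are funny"),
        "Jokes are funny":
            ("feedback", "I wonder if anyone knows a joke about elephants"),
        "I wonder if anyone knows a joke about elephants":
            ("eject", "Hey, does anyone know any good jokes about elephants?"),
    }
    return chain.get(paper, ("feedback", paper))

print(run_room(rules, ["I'm bored"], 10))
```

The point of the sketch is only structural: output is no longer a linear function of the latest input, because intermediate papers loop back through the rules before anything leaves the room.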

Searle's Chinese room, then, is not meaningless; it is important in drawing the distinction between intelligence, intelligent behavior, and consciousness. It does not, however, show conclusively that no computer could move beyond behaving intelligently to being conscious.


Hilary Putnam’s Colored Cards Thought Experiment

Filed under: Intentionality — Peter @ 1:20 am

I know that was a long title, but bear with me. In chapter three of his book The Threefold Cord: Mind, Body, and World (amazon) Hilary Putnam describes a thought experiment which seems to undermine many of the internalist assumptions scientists and scientifically minded philosophers tend to make. (For those who have forgotten: internalism is the position that our mental states have their meaning and content independent of the contents of the world. An internalist would think that a thought such as "look, a bus!" would have the same content and meaning regardless of whether the bus was real or a hallucination.)

The experiment runs as follows: consider a container of white paint and a deck of one hundred cards. Paint the first card white, then add one drop of red to the white paint and paint the second card. Continue this process until you have painted all one hundred cards. The first and last card will seem to be different colors, but any two cards adjacent in the series (we will call these cards neighbors) will seem to be the same color. Let us then consider the experience of looking at one of these cards. Since the colors of adjacent cards seem the same to us, it seems reasonable (under an internalist account) to say that if we had replaced the card we had experienced with one of its neighbors our mental state would have been the same (the two mental states are identical). However, we could also replace the second card by its neighbor, and we would now seem forced to conclude that all three mental states are identical. If we continue this chain of reasoning, we are forced to conclude that the mental state associated with viewing any of these cards is the same. However, the first and last cards are obviously different, and thus it is impossible that our mental states when viewing them should be the same. From this Putnam concludes that it only makes sense to talk about a mental state with reference to the objects in the world (an externalist claim), and thus that the mental states created by viewing two different cards, no matter how close in color, are actually different.

As internalists we must either find a flaw in this thought experiment or renounce our views. The mistake lies in the assumption that every neighboring pair of cards really puts us in the same mental state. Some pairs may, but not all pairs, no matter how close they are in "real color". Of course you shouldn't simply take my word for it, so keep reading.

First let us consider the physiology of the eye. When looking at any one card, the neurons sensitive to color will fire in a way that correlates with the color of the card. The neurons are not perfectly sensitive to changes in color, however: there is a (nearly) infinite number of physically possible colors, but the neurons in our eyes can distinguish among only a smaller number, so colors that are really physically different may be detected as the same color by the eye. It is thus possible that two neighboring cards stimulate the same color response in the neurons. But even though runs of several cards may stimulate the eye in the same way, they will not all stimulate the eye in the same way. Even though the cards are spread out evenly along the color spectrum, the eye, not being infinitely sensitive, will divide them into smaller groups that it classifies as the same color.

Let us move, then, from the eye to mental states. Even though the eye perceives a slightly different color in two cards, it is still possible that the same mental state results; the information embodying the slight difference may be discarded. But just as the neurons in the eye divide a continuous spectrum of cards into discrete perceivable colors, so a range of information from the eye is divided into discrete mental states.

Assume then that we number the cards 1, 2, 3, 4, … The eye will physically perform some mapping from the colors of the cards to neuronal signals, perhaps like so: 1 -> n1, 2 -> n1, 3 -> n2, 4 -> n2, … Likewise the neuronal impulses could (holding all other conditions the same) map onto mental states, like so: n1 -> m1, n2 -> m2, n3 -> m2, n4 -> m3, … If we accept this description of how the mind works, the flaw in the colored cards thought experiment is obvious: the assumption that any pair of neighboring cards would invoke the same mental state, all other conditions held constant, cannot hold for every pair. No matter how close the cards are in color, there will be a point where one neuronal impulse stops being generated and another starts, and a point where one mental state stops being generated and another takes its place.
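The two-stage mapping just described can be made concrete as a pair of many-to-one step functions: card number to neuronal signal, then neuronal signal to mental state. The bucket sizes below are hypothetical (the argument doesn't depend on them); the point is only that each stage is a discrete grouping, so a boundary must fall between some pair of neighboring cards.

```python
def neuronal_signal(card, cards_per_signal=2):
    # The eye groups runs of physically different colors into one signal.
    return (card - 1) // cards_per_signal + 1

def mental_state(signal, signals_per_state=2):
    # A further many-to-one grouping from signals into mental states.
    return (signal - 1) // signals_per_state + 1

cards = range(1, 9)
signals = [neuronal_signal(c) for c in cards]               # [1, 1, 2, 2, 3, 3, 4, 4]
states = [mental_state(s) for s in signals]                 # [1, 1, 1, 1, 2, 2, 2, 2]

# Neighboring cards usually share a mental state, but a boundary must fall
# somewhere: here cards 4 and 5 are neighbors yet differ in state.
print(states)
```

However the buckets are drawn, as long as more cards go in than states come out, some neighboring pair straddles a boundary, which is exactly the flaw in the thought experiment's key assumption.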

How then do we explain the following phenomenon: when any two cards from the deck are placed side by side, they really do seem to be the same color? Explaining this involves telling a slightly more complicated story about color, but to be brief: the colors we actually see are "adjusted" by our unconscious depending on the other colors in our visual field, the surrounding environment, etc. When two colors so close in shade are presented to us, our unconscious assumes that they really are the same, and that any difference that might be perceived is due to lighting rather than a real difference in the colors. This makes sense: to process visual information efficiently our minds must create shapes from color impressions, and if small variations in color, resulting say from a slightly curved surface, were deemed important, we wouldn't be able to quickly recognize the objects in our environment.
