John Searle is famous both for being a materialist and for arguing that computers cannot be conscious (ok, he’s famous for other reasons too, but this is an interesting position for him to take). Most materialists think the exact opposite: since there is no nonphysical mental substance, consciousness should be able to occur in places besides human brains, and sophisticated computers seem like an excellent candidate. This makes Searle’s work especially interesting; is he misunderstanding how computers work, or should we abandon AI research in light of his arguments?
Searle has several reasons for taking this position, but the one I am going to examine here is his famous Chinese room thought experiment (although you probably guessed that already from the title). The Chinese room consists of a man (who doesn’t speak Chinese) and a book of instructions. Papers containing Chinese sentences are passed into the room. The man manipulates the Chinese characters according to the rules in the book, and out come appropriate responses in Chinese. We assume that these responses give every indication of consciousness (for example, they refer to internal states) and that the room could pass the Turing test. Even though the room’s responses may seem intelligent, and meet the criteria most materialists set for having consciousness, it should seem obvious to us (according to Searle) that the Chinese room is not conscious.
Many philosophers have of course disagreed with Searle (for example Douglas Hofstadter), claiming that the Chinese room is in fact conscious. I, however, think that the Chinese room does lack consciousness, but that this lack does not imply that computers in general cannot be conscious. I have argued before that intelligence can be separated from consciousness, and I think Searle’s Chinese room is doing just that: it is intelligent but unconscious.
Our intuition that the Chinese room is not conscious comes from two closely connected factors. One is that it can never spontaneously generate output. At least as Searle presents it to us, the Chinese room simply responds, in a somewhat predictable way, to input. This is not the way consciousness works, however, at least in our experience; we are conscious constantly, not just after someone asks us a question. By setting up the thought experiment in this way Searle has almost guaranteed that a stream of consciousness in this system is impossible. It would not necessarily be impossible in all computer systems, though, some of which might run constantly and spontaneously converse “whenever they feel like it”.
The second factor making it impossible for the Chinese room to be conscious is that it cannot be self-conscious. Once again the thought experiment is presented in such a way that the input seems to be processed in a linear fashion by the man operating the room: rules A are applied, then rules B, etc., until the results are generated. This is not the way consciousness works, however; consciousness contains feedback, such that one thought leads to another thought and then another, until we are rendered unconscious. In a system that proceeds only in a linear manner there is no possibility of consciousness, because consciousness requires thoughts, and thoughts are partly defined by our ability to reflect upon them.
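The difference between these two architectures can be made concrete with a toy sketch. This is purely my own illustration, not anything from Searle: the "rules" here are invented stand-ins that just tag a message, and the fixed step count is an arbitrary assumption. The point is only the wiring: the linear room passes input straight through the rules once, while the feedback version routes each pass's result back in as new input.

```python
# Toy illustration (my own, not Searle's) of linear vs. feedback processing.
# The rule functions and step count are invented stand-ins for the example.

def linear_room(message, rules):
    """Apply each rule set once, in order -- input goes straight to output."""
    for rule in rules:
        message = rule(message)
    return message

def feedback_room(message, rules, steps=5):
    """Keep a working stack; the output of one pass is fed back in as
    input to the next pass, like papers re-entering the room."""
    stack = [message]
    for _ in range(steps):
        current = stack.pop(0)
        for rule in rules:
            current = rule(current)
        stack.append(current)  # fed back into the system at an earlier point
    return stack[0]

# Toy "rules" that merely tag the message as they process it.
rules = [lambda m: m + "-A", lambda m: m + "-B"]
```

With these toy rules, `linear_room("x", rules)` yields `"x-A-B"` and stops, while `feedback_room` keeps re-processing its own prior output, so each extra step appends another `-A-B`; nothing here is conscious, of course, but only the second structure even allows a result to become the subject of further processing.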
So Searle is right: the Chinese room is not conscious. However, let me propose a modification of the Chinese room thought experiment that will make it seem intuitively conscious. We still have a man in a room manipulating Chinese symbols. Now, however, he is constantly at work; there are papers covered in symbols going everywhere, and sometimes papers generated by one set of rules are fed back into the system at an earlier point. Sometimes input (in Chinese characters) is added to the room, where it joins the stack of papers the man is working with, but it is just one paper among many. Occasionally the rules tell the man to eject one of the papers from the room; this is the output from the system. Sometimes it is in response to a question, and sometimes it seems to occur at random. Now let us allow an actual Chinese speaker to examine the contents of the room. If the Chinese speaker were to read some of the papers flying around, they might see phrases like “I’m bored”, “Jokes are funny”, and “I wonder if anyone knows a joke about elephants”, and then a piece of paper might appear as output that says “Hey, does anyone know any good jokes about elephants?”. The Chinese speaker would feel like they were reading the room’s thoughts. Now I think our intuition would lead us to believe that the Chinese room is conscious, or at least my intuition does.
Searle’s Chinese room, then, is not meaningless; it is important in drawing the distinction between intelligence, intelligent behavior, and consciousness. It does not, however, show conclusively that no computer could move beyond behaving intelligently to being conscious.