On Philosophy

June 12, 2006

Searle’s Chinese Room

Filed under: Mind — Peter @ 4:59 pm

John Searle is famous both for being a materialist and for arguing that computers cannot be conscious (ok, he’s famous for other reasons too, but this is an interesting position for him to take). Most materialists think the exact opposite: because there is no nonphysical mental substance, consciousness should be able to occur in places other than human brains, and sophisticated computers seem like an excellent candidate. This makes Searle’s work especially interesting; is he misunderstanding how computers work, or should we abandon AI research in light of his arguments?

Searle has several reasons for taking this position, but the one I am going to examine here is his famous Chinese room thought experiment (although you probably guessed that already from the title). The Chinese room consists of a man (who doesn’t speak Chinese) and a book of instructions. Papers containing Chinese sentences are passed into the room. The man manipulates the Chinese characters according to the rules in the book, and appropriate responses in Chinese come out. We assume that these responses give every indication of consciousness (for example, they refer to internal states) and that the room could pass the Turing test. Even though the room’s responses seem intelligent, and meet the criteria most materialists set for consciousness, it should seem obvious to us (according to Searle) that the Chinese room is not conscious.
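To make the set-up concrete, here is a minimal sketch in Python of the kind of rule-following the man performs (my own toy illustration, not Searle’s formulation; the rules and phrases are invented placeholders). The point is that every step is a mechanical lookup, and nothing in the procedure understands Chinese.

    # A toy "rule book": input patterns mapped directly to canned replies.
    # The entries are hypothetical placeholders.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "你有意识吗？": "当然，我正在思考。",   # "Are you conscious?" -> "Of course, I'm thinking."
    }

    def chinese_room(message: str) -> str:
        # The man matches the incoming paper against the book and copies out
        # the listed response; no step requires understanding the symbols.
        return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你有意识吗？"))

A real rule book would of course have to be vastly larger and do more than exact matching, but the character of the process would be the same.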

Many philosophers have of course disagreed with Searle (for example Douglas Hofstadter), claiming that the Chinese room is in fact conscious. I, however, think that the Chinese room does lack consciousness, but that this does not imply that computers in general cannot be conscious. I have argued before that intelligence can be separated from consciousness, and I think Searle’s Chinese room does just that: it is intelligent but unconscious.

Our intuition that the Chinese room is not conscious comes from two closely connected factors. One is that it can never spontaneously generate output. At least as Searle presents it, the Chinese room simply responds, in a somewhat predictable way, to input. This is not how consciousness works, at least in our experience; we are conscious constantly, not just after someone asks us a question. By setting up the thought experiment in this way Searle has almost guaranteed that a stream of consciousness in this system is impossible. It would not necessarily be impossible in all computer systems, though; some might run constantly and spontaneously converse “whenever they feel like it”.

The second factor making it impossible for the Chinese room to be conscious is that it cannot be self-conscious. Once again the thought experiment is presented in such a way that the input seems to be processed in a linear fashion by the man operating the room: rules A are applied, then rules B, etc., until the result is generated. This is not how consciousness works; consciousness contains feedback, such that one thought leads to another thought and then another, until we are rendered unconscious. In a system that proceeds only in a linear manner there is no possibility of consciousness, because consciousness requires thoughts, and thoughts are partly defined by our ability to reflect upon them.

So Searle is right, the Chinese room is not conscious. However, let me propose a modification of the Chinese room thought experiment that will make it seem intuitively conscious. We still have a man in a room manipulating Chinese symbols. Now, however, he is constantly at work; there are papers covered in symbols going everywhere, and sometimes papers generated by one set of rules are fed back into the system at an earlier point. Sometimes input (in Chinese characters) is added to the room; it joins the stack of papers that the man is working with, but it is just one paper among many. Occasionally the rules tell the man to eject one of the papers from the room; this is the output of the system, sometimes in response to a question and sometimes seemingly at random. Now let us allow an actual Chinese speaker to examine the contents of the room. If the Chinese speaker were to read some of the papers flying around they might see phrases like “I’m bored”, “Jokes are funny”, “I wonder if anyone knows a joke about elephants”, and then a piece of paper might appear as output that says “Hey, does anyone know any good jokes about elephants?”. The Chinese speaker would feel like they were reading the room’s thoughts. Now I think our intuition would lead us to believe that the Chinese room is conscious, or at least my intuition does.
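Sketched as a program (again my own rough illustration, with invented rules and phrases), the modified room looks like a loop over a workspace rather than a single pass from input to output:

    import random

    def modified_room(rules, steps, external_input=None):
        # `rules` maps one paper (a string) to (new papers, optional output);
        # its contents are hypothetical, just as in the thought experiment.
        workspace = ["我觉得无聊。"]          # the room starts with a paper: "I'm bored."
        outputs = []
        external_input = external_input or {}
        for step in range(steps):
            if step in external_input:
                workspace.append(external_input[step])  # input is just one paper among many
            paper = random.choice(workspace)            # the man picks up some paper
            new_papers, emitted = rules(paper)
            workspace.extend(new_papers)                # results are fed back into the pile
            if emitted is not None:
                outputs.append(emitted)                 # occasionally a paper is ejected
        return outputs

    def toy_rules(paper):
        # Hypothetical rule: usually write a follow-up paper, occasionally eject one.
        if random.random() < 0.05:
            return [], paper
        return [paper], None

    print(modified_room(toy_rules, steps=200))

The essential difference from Searle’s version is the feedback: papers generated by the rules become part of the material the rules later operate on, and output need not be prompted by any input.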

Searle’s Chinese room, then, is not meaningless; it is important in marking the distinction between intelligence, intelligent behavior, and consciousness. It does not, however, show conclusively that no computer could move beyond behaving intelligently to being conscious.

23 Comments

  1. I think an even better extension would be to replace the man in the room with a bureaucracy, with detailed procedure guides at some levels, general policy rules at others, but all still controlled by the rules (e.g., there is no “CEO” that can change the rules, though there may be rules that cause other rules to be modified).

    Searle put a single man in the room for a reason. He wanted (I think dishonestly) to lure his audience into thinking of the man as the only thing in the room that might know Chinese. Then, by pointing out that the man didn’t know Chinese and that the system as a whole “obviously” couldn’t know Chinese (argument by incredulity), he could suggest that the appearance that it did was misleading.

    — MarkusQ

    Comment by MarkusQ — June 12, 2006 @ 9:00 pm

  2. I would still argue that a bureaucracy, if the information was processed linearly, could not carry out a process that we would consider conscious. It is true that Searle is playing with the reader a bit when he presents his set-up, but despite that I think it does have some validity. I could imagine a program that fools people, but has no thoughts.

    Comment by Peter — June 12, 2006 @ 10:54 pm

  3. “I would still argue that a bureaucracy, if the information was processed linearly, could not carry out a process that we would consider conscious.”

    This seems like an odd statement to me.

    “Linear” in the mathematical sense (changes in output are strictly proportional to changes in inputs) clearly couldn’t apply here, since such a system couldn’t begin to produce cogent responses to natural language, Chinese or otherwise. (Ditto for the more relaxed sense of lacking feedback loops).

    If you mean “sequential”, that too is odd. There’s no significant difference, in an absolute, philosophical sense, between a sequential and a parallel process. Even in practical terms, a modern computer with a single processor operating sequentially can appear to be doing many things in parallel. The distinction isn’t nearly as fundamental as you make it out to be (if that is indeed your point).

    Further, what sort of bureaucracy are you imagining that has this property of linearity that you refer to?

    And finally, if you can imagine a program that has no thoughts but fools people into thinking it does, can you discount the possibility that there might be “people” among us who are mere automatons, devoid of thought but fooling us? How do you resolve such a position without slipping into either vitalism on the one hand or solipsism on the other?

    –MarkusQ

    Comment by MarkusQ — June 13, 2006 @ 6:03 am

  4. If input A is transformed into B by rules A’, which is in turn transformed into C by rules B’, etc., this is a linear process. Think computer science here. Not all computer programs function in this way, in fact few do; the human brain definitely does not. However one could imagine a linear process of this sort generating intelligent output. I can spell that out in more detail if you like. But this linear process couldn’t be conscious as we understand it.
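    A rough sketch of what I mean, in toy Python (the rules are placeholders, and nothing here is meant as a serious conversationalist):

        def rules_a_prime(a: str) -> str:
            # A -> B: e.g. segment the input symbols (placeholder transformation)
            return a.strip()

        def rules_b_prime(b: str) -> str:
            # B -> C: e.g. match the segmented input against a response template
            return "回应：" + b    # placeholder "response" prefix

        def linear_process(a: str) -> str:
            # Each rule set is applied exactly once, in a fixed order, with no
            # feedback and no state carried over from previous inputs or outputs.
            return rules_b_prime(rules_a_prime(a))

        print(linear_process("你好吗？"))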

    We would say that people aren’t fooling us because we know roughly how their brains work, and the method of their operation is one that guarantees consciousness.

    Comment by Peter — June 13, 2006 @ 11:56 am

    “However one could imagine a linear process of this sort generating intelligent output. I can spell that out in more detail if you like.”

    You might imagine it, but you couldn’t construct it. If I recall Searle’s paper correctly, the system was supposed to respond reasonably to conversation in Chinese. Even with an infinite set of rules you couldn’t do this with a purely feed-forward system. Consider the following dialog (in English, for the benefit of our readers who are not fluent in Chinese):

    What is one plus one?
    TWO
    What did you answer to the last question?

    Without at least the ability to feed the output of the final ruleset back into the initial ruleset (or to maintain internal state, which would likewise break your model) you couldn’t even handle this simple conversation. And anything that could handle these sorts of questions would be as powerful (in Turing’s sense) as any parallel computer.

    –MarkusQ

    Comment by MarkusQ — June 13, 2006 @ 11:52 pm

  6. I knew that you were going to bring up just that point. Even in a linear system you will admit it is possible to store past inputs (although not past outputs). When asked that question, what the system must do is launch a subsystem that (linearly) recomputes its output to that previous question and then uses that information to compute the current answer. Of course it is possible that the subsystem in turn might need to invoke a subsystem, and without a loop we could only have finitely many subsystems to compute previous answers. Let us assume then that we have 100 billion such subsystems nested together. After 100 billion questions the system must answer: “I forgot that question, it was so long ago”. But in this case it would still fool you, since a human probably would have forgotten as well.
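    Here is the shape of the idea as toy code (the recursion below is only shorthand for subsystems that, in the imagined rule book, would be written out explicitly one inside the other, which is why the depth is capped):

        MAX_DEPTH = 3   # stands in for the finitely many (say 100 billion) nested subsystems

        def answer(past_inputs, depth=0):
            # Reply to the latest question given only the list of past *inputs*
            # (no stored outputs). A question about a previous answer is handled
            # by a nested subsystem that recomputes that answer from the earlier
            # inputs rather than remembering it.
            current = past_inputs[-1]
            if current == "What did you answer to the last question?":
                if depth >= MAX_DEPTH or len(past_inputs) < 2:
                    return "I forgot that question, it was so long ago."
                return "I answered: " + answer(past_inputs[:-1], depth + 1)
            if current == "What is one plus one?":
                return "TWO"
            return "I don't understand."

        print(answer(["What is one plus one?",
                      "What did you answer to the last question?"]))  # -> I answered: TWO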

    Comment by Peter — June 14, 2006 @ 12:05 am

  7. Couldn’t you just ask the Chinese room, “Hey, would you tell me the result of running this Turing machine for an arbitrary number of loops?”

    Comment by Carl — June 14, 2006 @ 1:16 am

  8. So by “linear” you mean that the response must be a loop-free non-recursive pure function of the input to that point. Correct?

    Let us assume then that we have 100 billion such subsystems nested together. After 100 billion questions the system must answer: “I forgot that question, it was so long ago”.

    Granted, this will enable you to recover your responses to the earlier questions. But it fails on two points:

    * If we restrict ourselves to only using the matter in the known universe, you will soon run out of things to write your rules on. Why? Because we also need to deal with a whole host of other questions, such as:

    What do you think of Al Gore’s new movie?
    I HAVEN’T SEEN IT.
    What’s your Mother’s name?
    MOM.
    Did she see it?
    NO.
    How about her Mom?
    SHE SAW IT.
    What did she think of it?

    French is an interesting language. One of the things you can do in French that you can’t do in Chinese is construct palindromes. Do you know what that is?
    NO
    A palindrome is a sentence in an alphabetic language that reads the same if the order of the letters is reversed. Which of these sentences are palindromes?…

    And so forth. So you won’t be able to get away with just one sub-system; you’ll need several (dozens, probably) at each level. And since they can refer to each other (and to the general rules), each of them will have several full copies of the rules. So you will be looking at not 10000000000 (1e10) copies of your original system, but something more like 10^10000000000 or 100^10000000000 (1e10000000000 to 1e20000000000) copies. You can try to get around this by evaluating the subsystems sequentially (trading time for space) but you’ll just run out of time in the same way (and you still won’t have enough space).

    * Even if you could get around these problems (say, by begging for an infinite Universe to work in) all you would have done is increase the power of your system to the point where it was equivalent to a very large (but possibly still finite) Turing machine, and thus to any real computer, parallel or otherwise.

    — MarkusQ

    Comment by MarkusQ — June 14, 2006 @ 6:48 am

  9. So we have gotten to a point where you are simply arguing that it is too hard or too large. This means that you have conceded Searle’s conceptual point, that in principle the Chinese room is not necessarily conscious, and that it can fool you without being so.

    Carl- A person couldn’t do that either, so it is reasonable to allow the machine to fail. Can you predict the operation of your computer after several million operations?

    Comment by Peter — June 14, 2006 @ 7:04 am

    “So we have gotten to a point where you are simply arguing that it is too hard or too large. This means that you have conceded Searle’s conceptual point, that in principle the Chinese room is not necessarily conscious, and that it can fool you without being so.”

    Oh give me a break.

    * I’m not arguing “too hard” or “too large”; I’m pointing out that it is impossible to do in this reality. (It may be possible to do in some hyper-reality, but, as you point out elsewhere, it would be meaningless to assert that).

    * We’re just playing in the foothills here. There are things that can easily be computed by a finite general computer that cannot be computed by a finite non-recursive pure function. So “too large” in any case really is just that. At the very least, the fact that the rule book would have to be infinitely large means that it couldn’t be in a room, so I could in a moment of whimsy grant you your infinitely large book of rules without having to concede the existence of the system as a whole.

    * Second, I’m not arguing that the Chinese room is impossible, only your straw-man non-recursive pure function implementation of it.

    * Third, I haven’t “conceded Searle’s conceptual point” (which I find disingenuous, at best), since we haven’t been talking about his Chinese room but yours, and I’ve been arguing not that it could fool me but that you could not, even in principle, build it.

    * If you are going to insist that I be convinced of unlikely things by palpably impossible constructs, what’s to stop me from positing an amazingly persuasive unicorn that could convince you that you are, in fact, a brain in a vat? Granted, I could never even begin to make such a thing, but if I did it would by definition convince you, so you might as well give up now. Conversely, if my unicorn can’t convince you, why should I be convinced by your “arguments”?

    –MarkusQ

    Comment by MarkusQ — June 14, 2006 @ 8:21 am

  11. * I’m not arguing “too hard” or “too large”; I’m pointing out that it is impossible to do in this reality. (It may be possible to do in some hyper-reality, but, as you point out elsewhere, it would be meaningless to assert that).

    This caused me to, quite literally, laugh out loud.

    Sorry, Peter.

    Comment by Carl — June 14, 2006 @ 8:34 am

  12. One better than the unicorn:

    Hidden deep, deep, deep in the depths of all irrational numbers like pi are the binary ASCII codes for an argument so convincing that it would instantly persuade you to change all your beliefs to accord with mine. Since we already know that these digits must exist, based on the nature of irrational numbers (randomly varying, non-repeating, eventually hitting all patterns of some arbitrary length…), my argument is already known to exist. QED.

    Comment by Carl — June 14, 2006 @ 8:42 am

  13. Carl – You have not demonstrated that a convincing argument exists, only that all possible arguments exist. Do you really think there is a correct and convincing argument for 2 + 2 = 6?

    We are arguing here about which systems are conscious; to do so requires arguments about which systems can possibly display intelligent behavior, and thus we invoke possible-worlds talk. We are not arguing for the existence of any particular possible world.

    Comment by Peter — June 14, 2006 @ 9:09 am

  14. I did a three part series dealing with the Chinese Room Argument and came to a conclusion, following Terrence Deacon’s lead, which is very similar to Peter’s. I must admit, however, that Deacon’s arguments against AI are far more convincing than his arguments for its possibility that I review in my third post. I think the major problem which the Chinese Room presents is this: “Of what would the room be conscious?” If it is only conscious of words but not the objects which these words refer to, then this is simply not knowing the language in any meaningful way at all. Additionally, I wonder if it is possible to learn a language in an inferentially linguistic manner without exposure to the objects which the signs refer to. While such a program may recognize the patterns in the word usage, I’m not sure that I would ever call this capacity “linguistic.”

    Comment by Jeff G — June 14, 2006 @ 11:42 am

  15. Thanks for the links Jeff. Great blog by the way.

    Comment by Peter — June 14, 2006 @ 12:17 pm

  16. As someone with a bit of a computer science background I have 4 comments:

    1) Searle is quite clearly misunderstanding how computers work. The mechanism (a lookup table) used to manipulate the symbols is not a universal Turing machine (UTM), for one key reason: it cannot modify itself. Since the instructions of a UTM are encoded on the “tape”, it is possible to write UTM programs which modify their own instructions (for what I mean by that, see the toy sketch at the end of this comment). The instructions for the Chinese Room are fixed, preventing its behavioural set from including all possible computer behaviours. Since computers can do things at a computational level that the Chinese Room cannot, it is invalid to call the Chinese Room a “computer”, which means it cannot be used as an argument for or against “artificial intelligence”.

    2) Your first argument contains an implicit assumption that humans don’t have a constant input stream, which is clearly false. Every moment of our lives we are receiving “input” from the hundreds of thousands of nerves in our bodies. Most of it may normally be ignored by conscious processes, but (excluding cases of brain damage) at any given moment it can become part of our consciousness. Discounting our biology’s effect on our consciousness is a major mistake.

    3) Your suggestion for an improved (though still flawed; see comment 1) Chinese Room argument is comparable to a multitasking computer operating system, or a client/server system (such as a web server). I wouldn’t consider it conscious.

    4) I personally believe in strong AI, and my general argument against Searle’s position is this: Suppose we have a computer system which exactly duplicates a human brain. Searle asserts that “brains cause minds”. Therefore it has a mind. Since it is a computer, regardless of its implementation, we know that there exists a UTM which is equivalent to the computer system we supposed earlier, using purely computational methods (i.e. syntax manipulation, which is basically the same thing). This UTM is not a mind, by Searle’s assertion. Therefore we have two things which are equivalent, but one has a mind and the other does not. Therefore, one of Searle’s assertions is wrong.
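    To illustrate point 1, here is a toy sketch (my own, in Python) of a trivial machine whose “program” lives in its own memory and can overwrite its own instructions, unlike a fixed lookup table; the instruction set is invented for the example:

        def run(memory, steps=10):
            # memory holds both data and instructions; a "set" instruction may
            # overwrite another instruction, so the program can modify itself.
            pc = 0
            for _ in range(steps):
                op, a, b = memory[pc]
                if op == "set":            # memory[a] = b
                    memory[a] = b
                elif op == "print":        # print memory[a]
                    print(memory[a])
                elif op == "halt":
                    break
                pc += 1

        program = [
            ("set", 3, ("print", 5, None)),  # rewrite instruction 3 before it runs
            ("print", 5, None),              # prints "hello"
            ("set", 5, "changed"),           # rewrite the data at address 5
            ("halt", None, None),            # overwritten at step 0; now prints "changed"
            ("halt", None, None),
            "hello",                         # plain data at address 5
        ]
        run(program)

    Nothing in the Chinese Room’s rule book, as Searle describes it, ever rewrites the rule book itself.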

    Comment by Michael Mills — June 29, 2006 @ 5:44 pm

  17. 1) Some programs do work as Searle describes them; I know, I AM a programmer (for example, see my other blog). Such programs cannot be conscious, although I argue that it is possible for some programs to be conscious, just not this kind.

    2) We don’t need a constant input stream to be conscious. However our behavior is not based only on the input of the moment. That is how Searle’s machine operates, and I would argue that to operate in that way would make consciousness impossible.

    3) I didn’t say that it was all there was to consciousness, simply a pre-requisite. Read some of the other posts.

    4) Searle would argue that no computer system can completely duplicate the human mind. I can actually prove that this is true. The mind is a neural net with infinite precision on its signal strength (it is analog). No Turing machine can simulate such a set-up completely accurately (I can find the reference to the paper given enough time if you wish). Therefore if your argument rests on the assumption that the mind can be simulated by a Turing machine it is doomed to failure.

    Comment by Peter — June 29, 2006 @ 11:21 pm

  18. 1) I agree. The point was that Searle’s argument was based on a single type of program we know to be non-conscious, and he was attempting to describe all programs with it.

    4) I wasn’t suggesting we simulate the mind, but rather the brain. Simulate the biology, using a system like the one you described to simulate brain cells. Given that a cellular automaton can be replicated by a Turing machine, we can build a computer which acts like a brain cell, then create a “neural” network by connecting several of these cells (in a network like the internet), creating a mind by emulating the biology which contains it (see the toy sketch below). At this point, since we know that any computer system can be duplicated by a UTM, and UTMs can only perform numerical operations (which, by Gödel, we know are basically the same as symbol manipulation), we have a brain/mind built using only symbol manipulation. Therefore by Searle’s assertions we know that one is a mind and the other is not. Either I’ve done something wrong, or one of Searle’s assertions is incorrect (although we don’t know which one). As for the paper, sure, give me a reference; sounds like an interesting read.
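    Something like this toy sketch, say (discrete time and finite precision, which I grant is exactly where your precision objection bites; the wiring, weights, and thresholds are invented for illustration):

        import random

        class Cell:
            # A toy simulated brain cell: sum weighted inputs, fire past a threshold.
            def __init__(self, n_inputs):
                self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
                self.threshold = 0.5

            def step(self, inputs):
                total = sum(w * x for w, x in zip(self.weights, inputs))
                return 1 if total > self.threshold else 0

        # Wire a small network in which every cell sees every cell's previous output.
        N = 10
        cells = [Cell(N) for _ in range(N)]
        state = [random.randint(0, 1) for _ in range(N)]
        for _ in range(5):                  # run a few discrete time steps
            state = [cell.step(state) for cell in cells]
            print(state)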

    Comment by Michael Mills — June 30, 2006 @ 7:19 am

  19. But we can’t create a Turing machine that acts exactly like a brain cell because of the infinite precision of their signals (electrical strength); this is the problem. Searle didn’t argue that only brains could be conscious, just that Turing machines could not.

    Comment by Peter — June 30, 2006 @ 8:59 am

  20. OK, that makes sense.
    The point I was trying to make was that, since at the lowest level all computer programs are doing the same thing, if you support weak AI you must allow strong AI, because at some level the weak AI program is being turned into a purely computational program (so that it can be executed), and that program is a strong AI program as it uses only computational methods. At the only level that matters to the computer, there is no difference between strong and weak AI; in fact it is possible (if highly unlikely) that a strong AI program and a weak AI program could produce the same machine code when compiled.

    Comment by Michael Mills — June 30, 2006 @ 10:13 am

    “At the only level that matters to the computer, there is no difference between strong and weak AI; in fact it is possible (if highly unlikely) that a strong AI program and a weak AI program could produce the same machine code when compiled.”
    There is no evidence for this claim, and it seems pretty obviously false to me. For example, consider the syntax–semantics gap. Sure, they both do computation, but there is no requirement that they do it in the same way, and it is the process that matters to consciousness, not the fact that they both run on computers.

    Comment by Peter — June 30, 2006 @ 12:36 pm

  22. The Luminous Room: A hilarious takedown of the Chinese Room argument that I ran across while reading The Stanford Encyclopedia of Philosophy’s article on the Chinese Room.

    Weak EM is the claim that light can be “instructively simulated” by the behavior of interacting electric and magnetic fields. This claim is uncontroversial. However, there is a stronger claim known as strong EM, and it is not only controversial, in my (the arguer’s) opinion it is false. Strong EM is the claim that not only can light be simulated by the behavior of interacting electric and magnetic fields, but that light is identical with electromagnetic (EM) waves. This claim is easily refuted by the following argument:

    (1) Electricity and magnetism are physical forces.

    (2) The essential nature of light is original visibility.

    (3) Physical forces, no matter how they are deployed, are neither identical with, nor sufficient for, original visibility.

    Therefore,

    (4) Electricity and magnetism are neither identical with, nor sufficient for, light.

    The truth of the first two premises of this argument is indisputable, and a short thought experiment will establish the truth of premise (3).

    Comment by Carl — February 9, 2007 @ 8:36 pm

  23. First presented in “The Rediscovery of Light” by Paul M. Churchland, in the Journal of Philosophy, vol. 93, pages 211–228, 1996. As you might have guessed, I have already read it; in fact it’s in my references list, which is why I have all the information. You can also adapt it to be about qualia.

    Comment by Peter — February 9, 2007 @ 8:46 pm

