On Philosophy

April 30, 2006

Who Is Conscious?

Filed under: Mind — Peter @ 1:08 am

The problem of other minds is a classic issue in philosophy: how do we know that the people around us are conscious in the same way we are? More recently the question has also been applied to artificial intelligence: how do we know whether a computer is conscious in the same way we are? The solution to both of these problems is widely considered to be the Turing test. In such a test we engage the subject in a dialogue, and if the subject answers “intelligently” we judge them to be conscious. The Turing test thus seems like an easy way to divide the conscious from the non-conscious, and so to neatly resolve the problem of other minds.

The test is supposed to guarantee that the subject is conscious in the following way: First we assume that there are only two ways intelligent responses can be generated. One way is to actually be a conscious being. The other is to respond with pre-generated intelligent answers, which implies that the answers have been designed by an intelligent being and that the non-conscious machine figures out which one to use by examining its input and comparing it to some kind of complicated look-up table.* However, we know that the look-up table method will fail because the machine has only finite resources with which to store answers and rules. Because any question could be asked at any time by the interrogator, such a machine would quickly reveal itself to be non-conscious by being unable to find an appropriate intelligent response.**
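
To make the look-up idea concrete, here is a minimal sketch of such a machine (the canned entries and the function name are invented purely for illustration; any real attempt would use a vastly larger, but still finite, table). It answers only by matching its input against pre-generated replies, so the moment the interrogator asks something its designers did not anticipate it has nothing intelligent to say.

```python
# A minimal sketch of the "look-up table" responder described above.
# The canned entries are invented for illustration; a real attempt would
# use a vastly larger table, but it would still be finite.

CANNED_ANSWERS = {
    "what is your name?": "I'm called Alice.",
    "how are you today?": "Quite well, thank you.",
    "what is two plus two?": "Four, of course.",
}

def respond(question: str) -> str:
    """Answer purely by look-up; no reasoning happens here."""
    key = question.strip().lower()
    # Any question outside the finite table leaves the machine speechless,
    # which is exactly how an interrogator would unmask it.
    return CANNED_ANSWERS.get(key, "...")

if __name__ == "__main__":
    print(respond("What is your name?"))                     # canned hit
    print(respond("What did you dream about last night?"))   # no entry: unmasked
```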

However, the Turing test has a fatal flaw that makes it usable only practically, and not conceptually, to divide the conscious from the non-conscious. The flaw is that there are at least two ways, in principle, of creating finite machines that could fool an interrogator for the duration of any finite conversation. Method one is to have time travelers design your machine. Since they know beforehand which questions will be asked, it would be easy for them to pre-program a small set of intelligent answers that would fool the interrogator perfectly. If you don’t have access to a time machine, you could instead create your machine by predicting the future. Obviously you couldn’t predict the future of the entire universe with finite resources, but in principle you could predict the local future with a great degree of accuracy given enough data. Even taking into account quantum mechanics, which would imply that you must account for many possible futures, you could still design a machine with a small set of answers that could fool the interrogator.

Admittedly, neither of these machines is practically feasible, which means that the Turing test is still a good judge of consciousness in practical cases; but because it can fail in principle, it seems that there is more to consciousness than just behavior. The real dividing line between the conscious and the non-conscious, then, is how the responses are generated. Are they generated by thoughts, or are they generated by rote? Currently I am working on a theory that gives a criterion for dividing between the two without resorting to examining behavior, but I am reluctant to discuss it here since it is still a work in progress. Of course I am always willing to listen to reader suggestions.

* Admittedly the method might be more complicated than this, since the machine could also follow rules to generate its answers, but it would still fail the test because it does not have room to store an infinite number of rules. (A machine with a finite set of rules that could fool the interrogator is assumed here to be a conscious mind, since such rules would need to express a complete phenomenal world in order to answer all questions without being infinite in number.)

** Another, less serious, case in which the Turing test fails to divide the conscious from the non-conscious is when the subject refuses to answer. It is perfectly possible for a conscious subject to sit in silence, and a machine could be created that imitated this behavior perfectly without being conscious.

2 Comments

  1. I am not sure I understand why there needs to be infinite memory for a machine to pass the Turing test. Is it not enough for the machine to know all of the knowledge that the collective intelligence of the human race has? If so, doesn’t the world wide web represent nearly all or most of it? (At least, can it not represent it in the not so near future?)

    If so, can there not be an AI engine which is a computer with huge processing power, connected to the internet, and thus having at its disposal all the knowledge that the human race has ever understood? Forgive me if that was too naive.

    Comment by Pi — May 21, 2006 @ 12:58 pm

  2. Even though there are only a finite number of words in the English language, there are an infinite number of possible questions (a rough counting sketch appears after the comments). Thus if the machine were working by look-up it would have to store an answer for each question if it were to pass the Turing test. If it could reason it wouldn’t need access to all known information anyway, since people are often ignorant, so the machine could perform intelligently simply by responding “I don’t know” in a consistent manner. Remember that the Turing test is meant to reveal whether the machine is conscious, not whether it can answer correctly any question put to it. The ability to know the correct answer and the ability to answer intelligently are two separate things.

    Comment by Peter — May 21, 2006 @ 1:21 pm
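
To put rough numbers on that counting point (an illustrative calculation; the 10,000-word vocabulary and the question lengths used below are arbitrary assumptions, not figures from the post): with a vocabulary of V words, the number of distinct word sequences of length at most L is V + V^2 + … + V^L, which grows without bound as L grows, so no finite table indexed by whole questions can ever be complete.

```python
# Illustrative sketch: how quickly the space of possible questions grows.
# The vocabulary size and question lengths are arbitrary example values.

def question_count(vocab_size: int, max_length: int) -> int:
    """Count the distinct word sequences of length 1 through max_length."""
    return sum(vocab_size ** k for k in range(1, max_length + 1))

if __name__ == "__main__":
    for length in (5, 10, 20):
        print(f"questions of up to {length} words: about {question_count(10_000, length):.2e}")
    # Even a modest 10,000-word vocabulary allows roughly 10**80 sequences
    # of twenty words, on the order of the number of atoms in the
    # observable universe, so no finite look-up table could cover them all.
```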

