The problem of other minds is a classic issue in philosophy: how do we know that the people around us are conscious in the same way we are? More recently the question has also been applied to artificial intelligence: how do we know whether a computer is conscious in the same way we are? A widely proposed solution to both problems is the Turing test. In such a test we engage the subject in a dialogue; if the subject answers “intelligently” we judge them to be conscious. The Turing test thus seems like an easy way to divide the conscious from the non-conscious, and so to neatly resolve the problem of other minds.
The test is supposed to guarantee that the subject is conscious in the following way. First we assume that there are only two ways intelligent responses can be generated. One way is to actually be a conscious being. The other is to respond with pre-generated intelligent answers, which implies that the answers were designed by an intelligent being and that the non-conscious machine figures out which one to use by examining its input and comparing it to some kind of complicated look-up table.* However, we know that the look-up table method will fail, because the machine has only finite resources with which to store answers and rules. Because the interrogator could ask any question at any time, such a machine would quickly reveal itself to be non-conscious by being unable to find an appropriate intelligent response.**
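The look-up table machine can be sketched in a few lines of Python. The table and the questions here are purely hypothetical toy examples; the point is structural, since a real machine of this kind would need an astronomically large table, and any finite table leaves most possible questions unanswered:

```python
# A toy sketch of the look-up table machine described above.
# The entries are hypothetical; the structural point is that a finite
# table cannot cover the unbounded space of possible questions.
lookup_table = {
    "what is your name?": "I'm Alan. Pleased to meet you.",
    "what is 2 + 2?": "Four, of course.",
    "do you like poetry?": "I prefer chess problems, honestly.",
}

def respond(question: str) -> str:
    # Normalize the input and search for a pre-designed answer.
    key = question.strip().lower()
    # With finite storage, most questions have no entry, so the
    # machine betrays itself on any unanticipated input.
    return lookup_table.get(key, "I... do not understand the question.")

print(respond("What is 2 + 2?"))            # a stored, "intelligent" answer
print(respond("Describe your childhood."))  # the fallback exposes the machine
```

An interrogator who sticks to the anticipated questions sees intelligent replies; one novel question is enough to expose the mechanism.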
However, the Turing test has a fatal flaw that makes it usable only practically, not conceptually, to divide the conscious from the non-conscious. The flaw is that there are at least two ways, in principle, of building finite machines that could fool an interrogator for the duration of any finite conversation. Method one is to have time travelers design your machine. Since they know beforehand which questions will be asked, it would be easy for them to pre-program a small set of intelligent answers that would fool the interrogator perfectly. If you don’t have access to a time machine, you could instead create your machine by predicting the future. Obviously you couldn’t predict the future of the entire universe with finite resources, but in principle you could predict the local future with a great degree of accuracy given enough data. Even taking quantum mechanics into account, which would require you to allow for many possible futures, you could still design a machine with a small set of answers that could fool the interrogator.
Admittedly neither of these machines is practically feasible, which means that the Turing test is still a good judge of consciousness in practical cases; but because it can fail in principle, it seems that there is more to consciousness than just behavior. The real dividing line between the conscious and the non-conscious, then, is how the responses are generated. Are they generated by thought, or by rote? Currently I am working on a theory that gives a criterion for dividing between the two without resorting to examining behavior, but I am reluctant to discuss it here since it is still a work in progress. Of course I am always willing to listen to reader suggestions.
* Admittedly the method might be more complicated than this, as the machine could also follow rules to generate its answers, but it would still fail the test because it does not have room to store an infinite number of rules. (A finite set of rules that would fool the interrogator is assumed to be a conscious mind, as such rules would need to express a complete phenomenal world in order to answer all questions without being infinite.)
** Another, less serious, case in which the Turing test fails to divide the conscious from the non-conscious is when the subject refuses to answer. It is perfectly possible for a conscious subject to sit in silence, and a machine could be built to imitate this behavior perfectly without being conscious.