On Philosophy

August 6, 2006

A Better Turing Test

Filed under: Mind — Peter @ 12:56 am

Before you begin reading, I would like to point out that today’s post is exploratory, meaning that there are no definite answers to the questions raised. Sorry.

Previously I argued that the Turing test is not a very good measure of consciousness because there are many systems (both computer programs and biological entities) that are conscious but cannot pass the Turing test, and that it is possible to construct non-conscious programs that can pass it. The Turing test is a good measure of verbal intelligence, but that is only loosely correlated with consciousness.

So what would make a good test? Obviously our criterion for consciousness can’t be solely dependent on external features, because conscious systems can display a wide range of behavior. Some may use language, some might not, some may be active, some may be passive, etc. I propose, then, that we look for some quality in the way the system functions (which of course presupposes that some version of materialism is correct, but that is a problem I have already addressed). Unfortunately I can’t hope to give a complete account of exactly what kind of activity in a system is required for consciousness here, since that would be equivalent to solving the mind-body problem, and to having a plan for creating artificial consciousness, which is a little too much for a short post.

One thing that might be said to define consciousness (or at least be found in most conscious systems) is thoughts. Intelligent conscious systems will of course have more complex and more structured thoughts than less intelligent ones, but it is hard to see how there could be “something it is like to be” a system (Nagel’s definition of consciousness) without them. It seems reasonable that even the simple consciousness found in animals includes rudimentary thoughts about their environment, a quality that sets them apart from automata.

What is a thought, then, and how do we know if a system has them? (And the answer I am looking for obviously must be more definite than the one given here.) As I mentioned above, I don’t have the ability to give a complete description of what a thought is, but I consider the following to be essential properties of thoughts: the ability to have a causal effect on future thoughts, and the ability to be about things.

Let me first explain why I think that these properties are essential to thought-ness, and thus to consciousness. The causal connection between thoughts is important because it allows a “stream of consciousness” to come into existence, where subsequent thoughts build on previous ones. Secondly, it explains how we can be aware of our own thoughts, specifically by our current thoughts reflecting on previous ones, and this ability probably also explains our experience of self-awareness. The other property of thoughts, their ability to be about other things, is what differentiates them from signals in a non-conscious feedback loop. Thoughts can of course be about many things, the self, other thoughts, abstractions, etc., but I think that all thoughts, even in the simplest of conscious systems, have the ability to be about aspects of the external world. (I admit that in principle it may not be necessary, but if the system can’t think about the external world then it is unlikely to provide us with any reason to suspect that it might be conscious.)
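To make these two properties a little more concrete, here is a minimal sketch in Python. It is entirely my own toy model, not a theory of mind: each “thought” simply records what it is about and which earlier thoughts caused it, so later thoughts can build on, and reflect on, earlier ones.

    from dataclasses import dataclass, field

    @dataclass
    class Thought:
        # What the thought is directed at: an external feature,
        # another thought, an abstraction, and so on.
        about: object
        # Earlier thoughts that causally fed into this one.
        caused_by: list = field(default_factory=list)

    def next_thought(stream, new_about):
        """Add a thought caused by the whole current stream, so that
        subsequent thoughts genuinely build on previous ones."""
        t = Thought(about=new_about, caused_by=list(stream))
        stream.append(t)
        return t

    stream = []
    apple = next_thought(stream, "red apple on the table")
    # A thought about a previous thought: the reflective step that
    # plausibly underlies awareness of one's own thoughts.
    next_thought(stream, apple)

Nothing in this sketch is conscious, of course; the point is only that both properties can be stated precisely enough to check for.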

It is relatively easy to tell whether a potential thought can have a causal effect on future thoughts if you know the physical facts about the system, but identifying about-ness is significantly more difficult. Experimentally, we could look for similarities between thoughts when the system is presented with the same kind of feature of the world (but not the exact same stimulus, mind you). Likewise, we could look for similarities between the activity associated with recognizing a specific feature of the world and the thoughts that cause behavior directed at that feature. This approach of course won’t identify about-ness directed at anything except the immediate objects of perception, and thus will not be as cut-and-dried as we would like (for example, sometimes the system may be presented with a feature but simply be thinking about something else), but unfortunately I can’t think of a better method for detecting about-ness at the moment.
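For what it’s worth, here is roughly how that experiment might look if we could read out the system’s internal states as vectors, which is a big assumption, and the similarity measure and all the names here are my own illustrative choices. States recorded while a feature is present should resemble one another more than they resemble states recorded while it is absent.

    import numpy as np

    def cosine_sim(u, v):
        # Similarity between two internal-state vectors.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def aboutness_score(with_feature, without_feature):
        """Compare internal states recorded while a feature of the world
        is present against states recorded while it is absent. Needs at
        least two 'with' states and one 'without' state. A clearly
        positive score is weak evidence of feature-directed structure."""
        within = np.mean([cosine_sim(a, b)
                          for i, a in enumerate(with_feature)
                          for b in with_feature[i + 1:]])
        across = np.mean([cosine_sim(a, b)
                          for a in with_feature
                          for b in without_feature])
        return within - across

Note that this inherits exactly the weakness just mentioned: if the system happens to be thinking about something else while the feature is present, the score drops even if the system is conscious.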

The next step of course is to identify obviously non-conscious systems that this criterion judges to be conscious, in order to determine what essential features of consciousness have been excluded. Before you send me your suggestions though, I should point out an “obvious” counter-example that fails the test for reasons that may not be so obvious. Say we had a “thought” consisting of two numbers, ⟨a, b⟩. A subsequent thought is generated as follows: ⟨(a+b)/2, c⟩, where a and b are the previous values and c is the result of some map from external objects to numbers. The reason this system isn’t conscious by our criterion is that previous thoughts are not really the cause of subsequent thoughts, only of their first element, and thus they don’t meet all the requirements.
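Here is that counter-example spelled out in code (the same toy system, nothing added):

    def next_pair(prev, c):
        # From the previous "thought" <a, b>, produce <(a + b) / 2, c>.
        # Only the first slot depends on the previous pair; the second
        # is fixed entirely by the external input c.
        a, b = prev
        return ((a + b) / 2, c)

    thought = (0.0, 1.0)
    for c in [0.3, 0.7, 0.2]:   # c: some map from external objects to numbers
        thought = next_pair(thought, c)
    # The second slot always just echoes the latest input, so previous
    # "thoughts" never cause the whole subsequent "thought": the causal
    # requirement above fails.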
