Recently I have been evaluating philosophical theories based on how well they explain, which in turn is determined by whether they can produce acceptable answers to all questions that might legitimately be posed to them. By these standards I judge Armstrong’s representational theory of consciousness to be a good theory, as are a number of higher order theories of consciousness. But although I think that they are good theories I would not agree that they are in fact theories of consciousness. This of course raises the more general question: how can we tell when a theory captures what it intends to? It is a question we don’t usually have cause to ask. Normally theories attempt to capture something that is available, at least in part, to normal perception. For example, if we had a theory about where trees grow and we went out and found that the theory correctly described where bushes grew, it would be pretty obvious that the theory was actually about bushes and not trees at all. (Of course we would never actually come across such a theory, since scientific theories are motivated by observation, and so lack the possibility of wandering away from their intended subject matter as philosophical theories do.)
To resolve this problem I appeal to an older idea, and propose that the things we wish to capture with our philosophical theories, such as consciousness, already have ostensive definitions, and that a theory thus only succeeds in capturing one of them if its theoretical (categorical) definition roughly coincides with the ostensive definition. An ostensive definition defines what we are seeking to describe as a certain kind of thing in terms of properties with which we can be immediately acquainted. For example, the ostensive definition of a particular species of tree might be “the kind of tree with leaf shape X, bark type Y, and general shape Z”. And of course our scientific theory may very well define that same species in terms of its DNA. But the two coincide, despite their vastly different definitions, because the scientific theory says that organisms with that kind of DNA will usually become trees with basically the same kind of leaves, bark, and shape as the ostensive definition says they have. Thus the theory about that species of tree in terms of its DNA properly captures that species of tree defined ostensively.
So to settle whether a philosophical theory is properly a theory about consciousness it is necessary to have an ostensive definition of consciousness. But that is a bit trickier than you might suppose. We can’t just appeal to our experience of what consciousness is like, because experience itself is very likely to be part of the theory. Such a definition wouldn’t really be ostensive at all, but would rather predetermine, to some extent, what form the theory must take (for example, it wouldn’t allow the theory to claim that we were in error about experience). Thus we must take a step further back and ask ourselves what other properties consciousness has. The key feature of consciousness, outside of facts about the nature of our experience, is that all normal adult humans are supposed to have it. This means that consciousness is what is common to all people minus whatever varies between them. All that remains to get an ostensive definition of consciousness, then, is to specify what varies from person to person. And that is pretty easy; the quick list is: personality, memories, intelligence, capacity to learn, and emotional dispositions. This gives us an easy test to determine when we have a theory that properly captures consciousness, or at least when we are close to one. We can take what the theory calls consciousness, in the abstract, and then add in all our individuating mental capacities. If what results could very well be us (if we, who are the authority on being us, couldn’t tell the difference between our current experience of consciousness and this new one so defined), then we have a theory of consciousness.
Let’s apply this ostensive definition, then, and see why these theories fail to capture consciousness, and thus, by extension, what kind of theory could capture it. First of all, the ability to represent the external world by itself clearly doesn’t constitute consciousness, because it doesn’t explain certain facts about experience (namely that our experiences themselves are, from moment to moment, available to us). Higher order theories account for this somewhat (although to give rise to experiences as we experience them they would have to be slightly refined, to say that the systems contain representations of previous internal states as a result of direct input/information from past states, and not, say, by deduction). But these theories fail to guarantee that the experience of such systems will be experience from a point of view. And assuming the theory could account for the existence of a point of view, it would then need to say why the system would think of its own internal states as having qualia. But given a theory that could accommodate all that, I think that we wouldn’t be able to tell the difference between being as we are now and being such a system plus our individuating mental characteristics. Of course there is the possibility that some characteristic of our mental lives is missed by this account, but that is a possibility I accept by virtue of my current approach to philosophy: we can never rule out someone asking “what about fact X?”.