Recently I have been considering the idea that we can determine whether a system is conscious only by examining how its state at a given instant is connected to previous and subsequent moments, in addition to the properties of the system at that moment (which is all that some theories consider relevant for consciousness). This same approach can be used to examine intentionality, specifically to address the question “which systems have intentionality?”
When we ask that question we are specifically interested in which systems have a “primary” about-ness. We intuitively understand that there are many kinds of things that can be about other things; for example, photographs are about their contents. However, many of the things we consider to have about-ness have only a “secondary” about-ness, meaning that they are about the things they represent only because beings exist who interpret them as being about those things. A photo of a dog is about a dog only because it invokes in us sensations similar to those we have when we see a real dog. To beings with different methods of visualizing the world that very same photo wouldn’t be about anything at all, and their equivalent of photos wouldn’t be about anything to us. Thus the photo’s about-ness depends on other things. In contrast, when a person sees a dog their experience is about the dog no matter who or what else exists in the world, and so in an important sense this about-ness, which we call intentionality, is more fundamental.
Just as when investigating consciousness, problems arise when attempting to find some criterion for an intentional relation in a specific instant. No arrangement of matter, it would seem, could be intentional, because we could take an image of that arrangement and, despite the fact that the image preserves all the relevant information, the image would not have intentionality. As with consciousness, the solution is to consider the whole system, not just a specific instant of it. Fred Dretske, in his paper “A Recipe for Thought”, proposed that there are some systems, such as compasses, that possess a primitive intentionality. The criterion for this primitive intentionality is that a property P of a system is about some feature of the world, C, if and only if the presence of P is usually caused by C. Thus a compass’s needle might be “about” the direction of the North Pole, since its orientation is usually caused by the earth’s magnetic field.
There is one problem with this account, however, which is that it still seems as though a person or some other being is required to give the orientation of the compass’s needle its meaning. Although the needle would still point north in a world devoid of people, it seems possible that it wouldn’t be about that direction. To remedy this we add the criterion that not only must P usually be caused by C, but P must also cause the system to act (including thought-acts) as if C. So a normal compass wouldn’t be about the North Pole, but a simple robot programmed to move northwards based on information from a compass would be. Of course behavior is how we usually determine whether a system has intentionality, since it is what we can most easily observe, but internal changes (thoughts) also count, as mentioned above, and so it is possible that some completely immobile systems have intentionality, although we might never know it.
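The amended criterion can be put schematically (the notation here is my own shorthand, not Dretske’s):

```latex
% P is about C iff C usually causes P (Dretske's indication condition)
% and P in turn causes the system to act, or think, as if C.
\[
  \mathrm{About}(P, C) \iff
    \mathrm{UsuallyCauses}(C, P) \;\wedge\;
    \mathrm{Causes}\bigl(P, \mathrm{ActAsIf}(C)\bigr)
\]
```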
This account fits perceptual intentionality well, but it still needs some work to cover all cases of “primary about-ness”. Most significantly, we have thoughts that are about external objects, but these thoughts are not caused (directly) by those objects. Fortunately, the account we have been developing requires only a small change to handle this. Instead of requiring that C be the usual cause of P, we instead require it to be the usual external cause of P, allowing P to be caused by internal processes as well (specifically, whatever unconscious processes produce thoughts). And instead of requiring P to result in acts as if C, we instead require P to result in acts appropriate to the mode of presentation, as if C. This is simply a way of saying that when P is invoked because of perception one set of acts is expected to result, those appropriate for C really being there, while when P is invoked because of a “supposing” another set of acts results, those as if possibly C, and so on for all the distinct ways in which P can be invoked.
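With both amendments in place, the criterion becomes (again in my own shorthand, with m ranging over the modes of presentation under which P can be invoked, such as perception or supposition):

```latex
% C need only be the usual *external* cause of P, and the acts P
% produces must be those appropriate to the mode of presentation m
% under which P is invoked.
\[
  \mathrm{About}(P, C) \iff
    \mathrm{UsuallyCauses}_{\mathrm{ext}}(C, P) \;\wedge\;
    \mathrm{Causes}\bigl(P, \mathrm{ActAsIf}_{m}(C)\bigr)
\]
```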
Finally, I should probably mention that this account of intentionality is not a form of externalism (although Dretske does develop his theory into an externalist account). This is because intentionality, as presented here, is not a part of the mind; rather, it is a way that we can describe or talk about the mind. As far as the operation of the mind is concerned, the cause of P is irrelevant (in the sense that the mind will have the same sequence of states, and the same consciousness, regardless of whether P at a particular moment is caused by C or by something else).