On Philosophy

May 2, 2007

Thesis Draft, Section 3, Revised

Filed under: Mind — Peter @ 12:00 am

3: Intentionality

Intentionality, loosely speaking, is the ability of the mind (or parts of the mind) to be about (directed at) objects in the external world. At first glance this might not seem like a problem, since complex physical systems can do a number of interesting things, and it doesn’t seem impossible that being intentionally related to objects in the world is simply one more of those things. However, some claim that material systems can’t be intentional, and thus to set these doubts aside we need to describe how a completely physical system could justifiably be said to be intentional.

Part of the problem of intentionality is that we don’t know exactly what we mean by intentionality. Is it just being “directed at” the external world, or is it something more? Is our understanding of meaning somehow caught up in this notion, implying that we need to explain how our thoughts really mean something and aren’t “just” a complicated pattern of behavior? As I see it, the problem of intentionality arises primarily from these confusions, not from any inherent difficulty in fitting intentionality into the physical world.

3.1: Separating Intentionality From Meaning

One of the confusions that arises in connection with intentionality concerns what it takes for a thought or mental event to have meaning. Let us suppose that we have a theory that describes how certain physical systems are related to the external world in ways that seem like they are directed at or about that world. Perhaps they have an internal representation of the external world that informs their behavior, and because this internal model accurately reflects the external world we can say that the behavior of those systems is informed by, and thus appropriately “directed at”, the world. But it seems like we could still wonder whether the relations of this system to the world were meaningful. Does it purposefully direct its behavior at the world? To demand that our theory of intentionality answer this is to demand that it explain consciousness as well, because it is reasonable to say that nothing is intrinsically meaningful except to a conscious system.

But if we require our theory of intentionality to include a theory of consciousness then we are asking too much of it. We are discussing intentionality here in order to dispel worries that a materialist theory of the conscious mind might be impossible because intentionality can’t be present in a purely physical system. If we make intentionality necessarily connected to consciousness, by tying it to meaning, then the objection becomes ill-formed; essentially the objection would then be that we can’t develop a material theory of the conscious mind because consciousness can’t be captured in a material theory. Of course whether consciousness can be captured by a material theory is something I will look at directly in section 7; here I am only concerned with setting aside various preliminary obstacles. If you mean by intentionality only a directedness at the world that is meaningful then obviously you won’t consider the theory I develop below a theory of intentionality. Instead you can view it as a kind of proto-intentionality which, when put together with a system that is conscious, as described in section 7, has meaning and is thus really intentionality.

3.2: Separating Intentionality From Reference

Another possible confusion is conflating intentionality with reference. Admittedly, how exactly we define reference is up in the air at the moment; there are several theories about it that all deserve serious consideration. The problem with simply collapsing intentionality into reference is that reference may be defined in terms of the contents of the external world. Identifying intentionality, a part of the mind, with reference would then lead to a kind of externalism, which cannot be a viable materialist theory of the mind, as argued in section 2. And, if that isn’t enough, there is another reason to think that reference and intentionality are distinct. Simply consider the difference in reference between us and a brain in a vat living in a simulated world exactly like ours. The brain refers to virtual things, while we refer to real things. But our experiences are the same, and, more importantly, the brain and we have the same experience of intentionality: we both take ourselves to be directed at a world of real things. So, as far as a theory of consciousness is concerned, the brain in the vat’s intentionality and my intentionality are essentially the same; a theory of consciousness needs to explain our conscious experiences, which are the same, not why we refer to what we do. Thus, for both of these reasons, we need to separate reference from intentionality in the context of a theory of consciousness. It is possible, perhaps likely, that they are related, that a combination of facts about our intentional states, the external world, and our relations to it determines reference. But as far as a theory of consciousness is concerned those connections are beside the point.

3.3: Separating Intentionality From the Experience of Intentionality

Intentionality must also be separated from the experience of intentionality, what it is like to be in an intentional mental state. I wouldn’t deny that there is an experience of intentionality, but that experience is best treated as part of the discussion about qualia (section 4). To combine the two is to make the problem unnecessarily hard by asking the materialist to give a single solution to problems that can be addressed separately. And there is no reason to believe that intentionality and the experience of intentionality must go hand in hand. Certainly some people can see without having the experience of sight (blindsight), so it isn’t hard to imagine that there may be intentionality without an accompanying experience of that intentionality (for example, if we taught someone with blindsight to navigate around a room using their unconscious sight it would seem as if they had a kind of intentional state concerning the room without being conscious of that state). In fact, systems without consciousness, and thus without experiences of any kind, might have a form of intentionality. Again, to decree that the two must go hand in hand would be to make an unjustified presupposition.

3.4: Why Intentionality Isn’t a Problem

But if we strip these confusions away intentionality no longer seems like much of a problem. Admittedly, some may use intentionality to describe a directedness at the world that is “meaningful”, or is part of a conscious experience. In that case what I am describing here is proto-intentionality, which can become “real” intentionality if it is part of a conscious system in the right way. As mentioned above, we only need to show that proto-intentionality can be realized in a completely physical system; to argue that a physical system can’t be conscious because it can’t have “meaningful” or experiential intentionality is to beg the question. For the sake of simplicity I am going to call this proto-intentionality intentionality, and leave the debate as to whether it is really what we mean by the term “intentionality” for another time.

Now, even if we don’t want to develop a detailed account of intentionality we can point out a number of completely physical systems that display evidence of it. And if these completely physical systems can have intentionality, clearly our brain, also hypothesized to be a completely physical system, could have intentionality. This shows, without even developing a complete theory, that intentionality isn’t necessarily a problem for materialism. One such system is the humble Electrolux Trilobite, a robotic vacuum cleaner. The Trilobite displays intentionality (or so I claim) by not running into obstacles when it vacuums (although it seems highly unlikely that the robot has an experience of intentionality). This seems like good evidence that the robot’s “mind” is in some way about or directed at the room, at least enough that it doesn’t run into things. If this isn’t evidence that the robot possesses intentionality then what evidence do I have that other people possess intentionality?
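To make the claim a little more concrete, here is a minimal sketch, in Python, of the kind of control loop that would count as evidence of this thin sort of directedness. It is purely illustrative; I know nothing about the Trilobite’s actual firmware, so the sensor interface, the threshold, and the little map are all invented for the example.

    # Illustrative sketch only -- not the Trilobite's actual control code.
    # The sensor interface, threshold, and map are assumptions made for the example.

    class ObstacleAvoidingVacuum:
        def __init__(self, sonar):
            self.sonar = sonar      # assumed sensor object: read() gives distance ahead in cm
            self.room_map = {}      # internal record of where obstacles were encountered

        def step(self, position):
            distance = self.sonar.read()
            if distance < 15:                       # something is close by
                self.room_map[position] = "obstacle"
                return "turn"                       # behavior informed by the internal state
            return "forward"                        # path is clear, keep vacuuming

    class FakeSonar:                # stub standing in for a real sensor
        def read(self):
            return 10

    vacuum = ObstacleAvoidingVacuum(FakeSonar())
    print(vacuum.step((2, 3)))      # "turn" -- the reading fell below the threshold

The point is not that such a loop means anything by its map, only that its behavior is systematically informed by an internal state that co-varies with the room, which is all the thin notion of being directed at the room requires.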

But perhaps this plays too much on our intuitions about intentionality, and as we well know reality is not obligated to conform to our intuitions. To set aside these worries I will briefly outline one possible explanation of intentionality in purely materialist terms. Although I won’t guarantee that it is the right explanation, at the very least it shows that plausible materialist explanations of intentionality do exist.

So, as a starting point, let me define intentionality directed at some object, say a tree, as some information in the system that encodes what kinds of perceptual experiences the system might have of the object, the behaviors that the system might engage in with that object, and the experiences that would result from those behaviors. This information, then, in that system, is intentionally directed at that object. This seems promising, but there is an obvious problem with it: we have defined what it means to be intentionally directed at something partially in terms of the object, which would be a kind of externalism.
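As a rough illustration of this provisional, object-involving definition (the particular percepts and behaviors are invented for the example, and a real system would of course store nothing so tidy), the tree-directed information might be pictured as a body of expectations like this:

    # Hypothetical illustration of the definition above: the "tree"-directed
    # information is just a body of expectations about percepts and about
    # what percepts would follow from various behaviors.
    tree_directed_info = {
        "expected_percepts": ["brown vertical trunk", "green canopy", "rough texture if touched"],
        "behavior_outcomes": {
            "walk toward it":  "trunk occupies more of the visual field",
            "push against it": "resistance felt, little movement",
            "walk around it":  "far side of the canopy comes into view",
        },
    }

On the definition above, a system that contains and is guided by such a body of expectations is intentionally directed at the tree, and the problem just noted is that the tree itself still appears in the description.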

To resolve this problem the object itself must be factored out of the definition. Consider the system by itself, not connected to a body, or even a world. The system still has inputs and outputs (perceptions and behaviors), but we have removed it from any possible context that would give meaning to those inputs and outputs. Still, if we were given all the information about the system we could generate possible worlds it could be embedded in, which I will call world-models. This would be done by discovering what ideas are triggered by a given input and then discovering what kinds of inputs the system expects if it generates various outputs (how its perceptions will change if it behaves in certain ways). If we put all the different input-output correlations together we drastically narrow the possible world-models. But we will never be able to narrow them down to just one world-model. For example, if we were studying the brain in this way, one model of the external world that would satisfy all the input-output correlations would be a world containing real objects, a world very close to the real one. But another, equally possible, world-model would be a world in which the system was running inside a simulation of the real world. And there might be even more abstract, but consistent, possible world-models as well, such as some unusual mathematical structure in which inputs are equivalent to complicated matrices and outputs are operations on those matrices. In each of these possible world-models there will be some object or objects that our intentional information is directed at (some features of the world-model that satisfy a particular group of input-output correlations). So let us say simply that this intentional data is directed at the set of those possible objects, which ultimately reduces to a very complicated connection between outputs, inputs, and the structure of the system. And this is a definition of intentionality that is completely independent of what is actually in the world.
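A toy sketch of the idea, with every name and both world-models invented purely for illustration, might look like this: record the system’s output-to-input correlations, then keep every object, in every candidate world-model, that would produce exactly those correlations.

    # Toy illustration (names and models invented for the example): the intentional
    # content is the set of candidate objects, across possible world-models, that
    # would produce the system's recorded input-output correlations.

    # Correlations recorded from the isolated system: output tried -> input that followed.
    correlations = {
        "walk toward it":  "trunk looms larger",
        "push against it": "resistance felt",
    }

    # Two hypothetical world-models, each mapping (object, output) -> resulting input.
    real_world = {
        ("oak tree", "walk toward it"):  "trunk looms larger",
        ("oak tree", "push against it"): "resistance felt",
    }
    simulated_world = {
        ("virtual tree", "walk toward it"):  "trunk looms larger",
        ("virtual tree", "push against it"): "resistance felt",
    }

    def satisfiers(correlations, world_models):
        """Collect every object, in every world-model, that fits all the correlations."""
        found = []
        for name, model in world_models.items():
            objects = {obj for (obj, _out) in model}
            for obj in objects:
                if all(model.get((obj, out)) == inp for out, inp in correlations.items()):
                    found.append((name, obj))
        return found

    # The intentional content, on this account, is this whole set -- not any one member.
    print(satisfiers(correlations, {"real": real_world, "simulation": simulated_world}))
    # [('real', 'oak tree'), ('simulation', 'virtual tree')]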

Of course intentionality may be directed at non-perceptual things as well, and it may seem like the above account excludes them; even though numbers are never a direct perceptual input we can still think about them, still be intentionally directed at them. However, I would say that this is simply understanding inputs too narrowly. In principle there is no reason that an “input” can’t come from another part of the mental system, say one that abstracts some feature from perceptual objects. For example, the intentional structure that is directed at numbers has as its inputs various mathematical objects and describes how those mathematical objects behave as a result of being operated on mathematically. Naturally the mathematical inputs and outputs of this structure are themselves abstracted from the many possible perceptual inputs and behavioral outputs that are the vehicles with which we deal with numbers in the world, in contrast to the structure corresponding to being intentionally directed at a tree, which dealt not with abstractions but with direct perceptual inputs.
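Schematically (again a made-up sketch, with the abstraction step grossly simplified), the number-directed structure differs from the tree-directed one only in where its inputs come from: they are supplied by another part of the system rather than by perception directly.

    # Made-up sketch: the number-directed structure takes its inputs from an
    # abstraction layer rather than from perception directly.

    def count_objects(percept):
        """Abstraction layer: turns a perceptual input into a mathematical one."""
        return len(percept["objects_seen"])

    def number_directed_structure(n):
        """Encodes how the abstracted input behaves under mathematical operations."""
        return {"successor": n + 1, "double": n * 2, "square": n ** 2}

    percept = {"objects_seen": ["apple", "apple", "apple"]}
    print(number_directed_structure(count_objects(percept)))
    # {'successor': 4, 'double': 6, 'square': 9}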

But, you might object, when I think about a tree my thoughts are directed at the tree, not some strange construct of input-output correlations. I freely grant that the experience of intentionality doesn’t seem this way, but, as I mentioned before, our experience of intentionality must be separated from intentionality itself. We might say that the experience of thinking about the tree has the qualia of being directed at a tree (a real tree), but that really the mind is only directed at the things that satisfy that complex construct of input-output relationships, which a real tree would satisfy (among other possibilities). Of course the qualia, what it is like to be intentionally directed at a tree, needs to be explained as well, which brings me to the next section.

[footnotes omitted]
