Intentionality, loosely speaking, is the ability of the mind (or parts of the mind) to be about, or directed at, objects in the external world. At first glance this might not seem like a problem: complex physical systems can do a number of interesting things, and being intentionally related to objects in the world might simply be one more of those things. However, some claim that material systems can’t be intentional, and so to set these doubts aside we need to describe how a completely physical system could justifiably be said to be intentional.
3.1: Separating Intentionality From Reference
But before we do that we need to clear up some of the confusion surrounding intentionality, which is by its nature a somewhat ill-defined notion. Before I can say what intentionality is I need to say what it is not. First of all, intentionality is not (necessarily) reference. How exactly we define reference is still up in the air; there are several theories about it that all deserve serious consideration. The problem with simply collapsing intentionality into reference is that reference may be defined in terms of the contents of the external world. Identifying intentionality, a part of the mind, with reference would then lead to a kind of externalism, which, as argued in section 2, cannot be a viable materialist theory about the mind. Thus we need to separate our theories of reference from our theories about intentionality. Of course the possibility that they will turn out to be the same thing cannot be ruled out, but we can’t simply presuppose it.
3.2: Separating Intentionality From the Experience of Intentionality
Intentionality must also be separated from the experience of intentionality, what it is like to be in an intentional mental state. I wouldn’t deny that there is an experience of intentionality, but that experience is best treated as part of the discussion about qualia (section 4). To combine the two is to make the problem unnecessarily hard by collapsing two tricky issues into one. And there is no reason to believe that intentionality and the experience of intentionality must go hand in hand. Certainly some people can see without having the experience of sight (blindsight), so it isn’t hard to imagine that there may be intentionality without an accompanying experience of that intentionality (for example, if we taught someone with blindsight to navigate around a room using their unconscious sight, they would seem to have a kind of intentional state concerning the room without being conscious of that state). In fact, systems without consciousness, and thus without experiences of any kind, might have a form of intentionality. Again, to decree that the two must go hand in hand would be to make an unjustified presupposition.
3.3: Why Intentionality Isn’t a Problem
But if we strip these confusions away intentionality no longer seems like much of a problem. Even without developing a detailed account of intentionality we can point to a number of completely physical systems that display evidence of it. And if these completely physical systems can have intentionality, then clearly our brain, also hypothesized to be a completely physical system, could have intentionality, showing, without even a complete theory, that intentionality isn’t a problem for materialism. One such system is the humble Electrolux Trilobite, a robotic vacuum cleaner. The Trilobite displays intentionality (or so I claim) by not running into obstacles when it vacuums (although it seems highly unlikely that it has an experience of intentionality). This seems like good evidence that the robot’s “mind” is in some way about, or directed at, the room, at least enough that it doesn’t run into things. If this isn’t evidence that the robot possesses intentionality, then what evidence do I have that other people possess intentionality?
But perhaps this plays too much on our intuitions about intentionality, and as we well know reality is not obligated to conform to our expectations. To set aside these worries I will briefly outline one possible explanation of intentionality in purely materialist terms. Although I won’t guarantee that it is the right explanation at the very least it shows that plausible materialist explanations of intentionality do exist.
So, as a starting point, let me define intentionality directed at some object, say a tree, as some information in the system that encodes what kinds of perceptual experiences the system might have of the object, the behaviors that the system might engage in with that object, and the experiences that would result from those behaviors. That information, in that system, is intentionally directed at that object. This seems promising, but there is an obvious problem: we have defined what it means to be intentionally directed at something partially in terms of the object itself, which would be a kind of externalism.
To resolve this problem the object itself must be pulled out of the definition. Consider the system by itself, not connected to a body or even a world. The system still has inputs and outputs (perceptions and behaviors), but we have removed it from any possible context that could give meaning to those inputs and outputs. Still, if we were given all the information about the system we could generate possible worlds it could be embedded in, which I will call world-models. This would proceed by discovering what ideas were triggered by a given input, and then what kinds of inputs the system expects if it generates various outputs (how its perceptions will change if it behaves in certain ways). If we put all the different input-output correlations together we drastically narrow the set of possible world-models, but we will never be able to narrow it down to just one. For example, if we were studying the brain in this way, one model of the external world that would satisfy all the input-output correlations would be a world containing real objects, a world very close to the real one. But another, equally possible world-model would be one in which the system was running inside a simulation of the real world. And there might be even more abstract, but consistent, world-models as well, such as some unusual mathematical structure in which inputs represent some complicated matrix and outputs are different operations on it. In each of these possible world-models there will be some object or objects that our intentional information is directed at (some features of the world-model that satisfy a particular group of input-output correlations). So let us say simply that the intentional data is directed at the set of those possible objects, which ultimately reduces to a very complicated connection between outputs, inputs, and the structure of the system, a definition completely independent of what is actually in the world.
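The idea that intentional content picks out a set of candidate world-models, rather than any one external object, can be made concrete with a small sketch. This is a toy illustration only; all names (the correlations, the world-models, `consistent_models`) are hypothetical, invented to mirror the argument above.

```python
# Toy sketch: intentional content as the set of world-models consistent
# with a system's input-output correlations (all names hypothetical).

# Observed correlations: (output the system produces, input it then expects).
observed_correlations = {
    ("reach_forward", "touch_bark"),
    ("look_up", "see_leaves"),
}

# Candidate world-models, each mapping the same outputs to the inputs
# that world would return.
world_models = {
    "real_world_tree":     {"reach_forward": "touch_bark", "look_up": "see_leaves"},
    "simulated_tree":      {"reach_forward": "touch_bark", "look_up": "see_leaves"},
    "unrelated_structure": {"reach_forward": "see_leaves", "look_up": "touch_bark"},
}

def consistent_models(correlations, models):
    """Return the names of every world-model that reproduces all of the
    system's input-output correlations."""
    return {
        name for name, model in models.items()
        if all(model.get(out) == inp for out, inp in correlations)
    }

# The correlations rule out inconsistent worlds but never single one out:
# the "intentional content" is the whole remaining set, fixed by the
# system's structure alone.
print(consistent_models(observed_correlations, world_models))
# → {'real_world_tree', 'simulated_tree'}
```

Note that the result depends only on the correlations and the candidate models, never on which world the system actually inhabits, which is the point of pulling the object out of the definition.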
But, you might object, when I think about a tree my thoughts are directed at the tree, not at some strange construct of input-output correlations. I freely grant that the experience of intentionality doesn’t seem this way, but, as I mentioned before, our experience of intentionality must be separated from intentionality itself. We might say that the experience of thinking about the tree has the qualia of being directed at a tree (a real tree), but that really the mind is only directed at the things that satisfy a complex construct of input-output relationships, which a real tree would satisfy (among other possibilities). Of course the qualia, what it is like to be intentionally directed at a tree, need to be explained as well, which brings me to the next section.
Qualia is a word invented to describe the subjective character of an experience. For example, when you see a red object you experience the qualia of that shade of red, which is not the same as the red light that enters your eyes: one is a feature of experience, a way of describing it, while the other is a part of the natural world. Obviously the two are in some way connected, but it is not obvious how. And by being inherently subjective, qualia pose a large problem for materialist theories about the mind. Because they aren’t based on the physical facts (at least not obviously) it seems legitimate to take any materialist theory about some part of the mind, say vision, and ask why red feels like red and not green to that system. We can give explanations of why it is said to be red, why the system will behave as though it were red, and possibly even form thoughts as if it were red, but what is wanted is an explanation of why it “feels” red.
4.1: Eliminating Qualia
By their nature qualia seem simply irreducible to physical facts about the system and the way it operates. Thus if materialism is to resolve the problem of qualia it must get rid of them. Naturally there will be those who object to this proposal. The first problem it might create is that by getting rid of qualia we may be doing away with subjective experience altogether, and if we get rid of subjective experience we have gotten rid of consciousness, the very thing we want to explain. But I do not think that getting rid of qualia forces us to discard subjective experience; however, I will have to ask that you simply assume that it doesn’t for the moment, since I have left the materialist treatment of consciousness, and subjective experience, for section 7, and I can’t explain how consciousness can remain without qualia until I have a theory about that consciousness. The second possible problem is that by eliminating qualia we might be eliminating something that we know indubitably exists, like an attempt to eliminate solidity: no matter what you say, the fact that some objects are solid remains. But in a way we do eliminate solidity; our ultimate explanations of the phenomenon contain only atoms and electromagnetic forces, and no fundamental quality of solidity. We don’t balk at these theories, however, because they also explain how the macroscopic phenomenon we think of as solidity arises from these more primitive components. Likewise, the theory about qualia I outline below will eliminate them, but will also explain why a system might think and act as if there really were such things as qualia. Hopefully, by explaining why we think qualia exist when in fact they don’t, the theory will be less offensive to common sense, just as a theory about matter that doesn’t contain any reference to solidity is still acceptable if it explains why the objects we think of as solid don’t pass through each other.
4.2: A Brief Framework
But before I explain why we think that such things as qualia exist, allow me to briefly explain what I think is really going on in the mind. Under materialist theories we can think of the mind as a system that receives inputs and produces outputs, a way of looking at minds that we have already used to explore intentionality. Let us specifically consider vision in one such system. This system receives an array (a two-dimensional grid) of signals, each of which corresponds to some unique color. We can think of each raw signal as being a single letter inside the system (the letter stands for the signal itself, so there is no temptation to treat it as a number or some other complex construct). And the only processing the system can do on these letters is based on their position in the grid and on comparisons of them to each other (given two letters the system can tell how similar they are, although this is itself a primitive function, the workings of which are not exposed to the higher functions of the system). It is my hypothesis that this is all there is to the experience of sight (and in general all perception of the world), the processing of certain primitive signals, and thus by extension all it means when I say that I am experiencing red is that I am processing a certain signal or group of signals.
Obviously the human mind has a more elaborate visual system: we favor certain colors over others, and think certain combinations are more pleasing than others. But this doesn’t make the signals themselves any less primitive. We could work these complexities into the system by arguing that such preferences are either generated by another basic kind of processing (like the comparison of similarity between signals), or built on top of that comparison, in the sense that certain degrees of similarity are pleasing while others aren’t.
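The architecture described above can be sketched in a few lines. Everything here is hypothetical illustration: the signal tokens, the similarity table, and the two functions are invented to show a system whose only operations on its raw signals are positional access and an opaque similarity check.

```python
# Toy sketch of the hypothesized visual system (all names hypothetical):
# opaque signal tokens in a grid, with only two primitive operations.

# The similarity table is hidden machinery; the higher system never sees it.
_SIMILARITY = {
    frozenset({"R", "O"}): 0.8,   # red and orange feel close
    frozenset({"R", "G"}): 0.1,   # red and green feel distant
    frozenset({"O", "G"}): 0.2,
}

def similarity(a, b):
    """Primitive comparison: returns a degree of similarity, never a reason."""
    if a == b:
        return 1.0
    return _SIMILARITY[frozenset({a, b})]

# A 2x2 "retina" of raw signals; each letter stands only for itself.
grid = [["R", "O"],
        ["G", "R"]]

def signal_at(grid, row, col):
    """Positional access, the only other primitive operation."""
    return grid[row][col]

# The system can report HOW similar two signals are...
print(similarity(signal_at(grid, 0, 0), signal_at(grid, 0, 1)))  # → 0.8
# ...but nothing in its interface can say WHY; on the hypothesis above,
# that opacity is all there is behind the "feel" of red.
```

The point of the sketch is that the signals are deliberately structureless from the system’s own point of view, which is what section 4.3 exploits.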
4.3: The Introspective Gap Solution
But this is not the complete explanation. As I mentioned in section 4.1, the above can only be an acceptable explanation (or elimination) of qualia if we can also explain why we think and act as if there were such things as qualia. I propose that the reason we have constructed these qualia is that the signals, and the comparisons between them, are simple to introspection, meaning that no amount of introspection can reveal their structure. When we think about our perception of a color, introspection doesn’t tell us why the color appears the way it does; it just is that way. Red is red and green is green. Likewise, we can’t say why two colors are similar on the basis of introspection. Of course we have constructed theories about color similarity, based on knowledge of how various colors appear when mixed, but these theories are not based in any way on our introspective awareness of color similarity, which only tells us how similar two colors are, not why.
So what does the mind do when some aspect of it is closed to introspection? In general minds could do anything, but human minds have a specific and well-documented response: they make things up. For example, the mind can give us a solution to a problem without our being consciously aware of where that solution came from (because it was developed by the unconscious). If you ask a person in such a situation where the solution came from they will fabricate a story, a story that has nothing to do with how they really came to the solution. But they will believe that story, and be totally unaware that it is a fabrication. Similarly, if basically identical items are placed in a sequence people tend to prefer the rightmost one (we can show this by shuffling the items and demonstrating that in each arrangement the rightmost is favored by a statistically significant margin). This means that some people had no reason to pick that item except for its position. But when asked why they picked it, every last one of them will have a story about why it is superior to the others, and will be totally unaware that their choice was determined largely by position. [yes, I will cite sources] I propose that qualia are another one of these made-up stories. When we introspect on our experience we come to certain things that are un-analyzable by the mind. The mind doesn’t know why the signal is the way it is, so it makes up a story, just as it does every other time introspection can’t reveal the answer. And that story is called the “feel”: we say something is red because it “feels” red. What does it mean for something to “feel” red? Well, we can’t say (except maybe to make analogies to other colors – it’s a bit like orange – but then we find ourselves unable to say exactly how it differs from orange – well, it’s more red).
I think this is good evidence to support the idea that the feel is a construction in response to a failure of introspection, and not a reflection of the real structure of the mind.
4.4: A Brief Detour Back To Intentionality
So, now that I have given a brief outline of a materialist theory of qualia, let me go back and completely wrap up the materialist theory of intentionality developed above. As you remember, I said that it didn’t completely explain the experience of intentionality, but that the remaining work, explaining the feeling of a thought being about an object, could be done by our theory about qualia. Now, part of our experience of intending consists of expectations about perceptions. When I think about an apple I have expectations about what the apple will feel like, look like, and so on. And I also have a set of expectations about what will happen if I move in relation to the apple, if I bite into it, if I throw it, and so on. These reflect the input-output correlations that the theory claimed were the real basis of intentionality.
But there is more to the experience of intentionality than that. When I think about an apple I am not just entertaining those expectations; I also feel that I am thinking about an apple. It is natural to suppose that this is a qualia, and thus represents an introspective failure. But in the case of perception introspection failed when it attempted to analyze inputs that were basically un-analyzable, so what is introspection unable to uncover in the case of intentionality? I think it is the connection between all these expectations. Obviously the many and varied expectations we have about an object are connected to each other; otherwise they wouldn’t all be available to us when we needed to think about that particular object. I hypothesize that when we introspect on that connection our introspection fails: the connection just is there, without further details. But, as usual, introspection makes up a story for us, in this case that the expectations have the “feel” of being about the same object.
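The claim that the connection between expectations is bare and structureless can be illustrated with a sketch in the same style as before. Again, every name here (the tags, the expectation records, `linked`) is hypothetical, made up to mirror the hypothesis rather than to model any real cognitive architecture.

```python
# Toy sketch (all names hypothetical): expectations about an object are
# tied together only by an opaque tag. The system can recover WHICH
# expectations share a tag, never WHY, since the tag has no structure.

expectations = [
    {"tag": "obj_17", "if": "look",  "expect": "see_red_skin"},
    {"tag": "obj_17", "if": "bite",  "expect": "taste_sweet"},
    {"tag": "obj_17", "if": "throw", "expect": "see_it_fly"},
    {"tag": "obj_42", "if": "look",  "expect": "see_bark"},
]

def linked(tag, pool):
    """Everything introspection can retrieve about the connection:
    the members of the group, and nothing about the link itself."""
    return [e["expect"] for e in pool if e["tag"] == tag]

print(linked("obj_17", expectations))
# → ['see_red_skin', 'taste_sweet', 'see_it_fly']
```

On the hypothesis above, the made-up story of "being about the same object" is the mind’s gloss on a link that, like the tag here, offers no further detail to inspect.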
Finally, I would like to address a thought experiment that brought qualia into focus as a problem for materialist theories about the mind, and show how this theory about qualia provides a satisfactory resolution to it. In the thought experiment we are asked to imagine a color-blind woman, named Mary, who studies color vision. By hypothesis, Mary knows all the physical facts there are to know about color vision. One day Mary’s color-blindness is fixed by a new surgery. Now that she can see colors, doesn’t she learn something new, specifically what colors feel like? If so, then clearly the physical facts don’t reveal all there is to know about the mind. [yes, I will cite the author of this version of the argument] With a single additional hypothesis the theory of qualia developed above can resolve this apparent problem. The hypothesis is that the primitive signal corresponding to a specific color (or color family; perhaps we can fill in missing shades by extrapolation, as Hume thought) can only be generated by perceiving the color or recalling a perception of it. This shouldn’t seem too unlikely, I hope, since if the signal really is un-analyzable we shouldn’t expect the mind to be able to construct it from nothing. So when Mary gains color vision she gains a new ability: to think about the color red in a way she couldn’t before. And when she reflects on this color her introspection will make up a story for her about the way it “feels”, a story her mind has never before had to conjure up about color. But neither of these is new knowledge, one is an ability and the other is false, and so the argument doesn’t show that there are facts about the experience of color that aren’t revealed by knowledge of the physical facts.