On Philosophy

October 4, 2007

Layers Of Intentionality

Filed under: Intentionality — Peter @ 12:00 am

What I have to say today about intentionality builds on a simpler conception of intentionality that I developed previously, so allow me to say a few words about it before I proceed. First, I would claim that, properly speaking, we are not intentionally directed at objects in the world at all. Rather we are intentionally directed at object possibilities. An object possibility can be understood as a range of objects, bounded by some constraints. For example, if we have seen only the front of an object then the object possibility we are intentionally directed at is a range of possible objects all sharing that front, but having any reasonable backside (one we wouldn’t be shocked by). If this object possibility “fits” a real object to some degree then we might say that we are intentionally directed at that object, but that is a third person way of speaking. If the object possibility we were directed at did not “fit” any real objects then, from our point of view, the intentional directedness is the same; it is only from the third person point of view that someone might be inclined to say that we weren’t really intentionally directed at anything at all. And, finally, allow me to say what these object possibilities are defined in terms of. In the vast majority of cases they seem to be defined in terms of sensory input, and how that input would change in response to our interactions with the object or to the object being affected by other forces. But we can allow for the possibility of intentionality directed at abstract entities, such as numbers, by allowing the input to also include ideas or concepts, the object possibility being defined by that input and how it changes as a result of abstract operations upon it.
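
For readers who like things concrete, here is a minimal toy sketch, in Python, of an object possibility as a set of constraints on sensory input plus expectations about how that input changes under interaction. Every name and representation in it is an invention of the example, not something the theory is committed to.

```python
# Toy model of an object possibility: constraints on current sensory input
# plus expectations about how input changes under interaction.
# All names and representations here are illustrative inventions.

class ObjectPossibility:
    def __init__(self, constraints, expected_transitions):
        self.constraints = constraints                      # what any fitting object must show now
        self.expected_transitions = expected_transitions    # interaction -> test on the new input

    def fits(self, obj):
        """Third-person talk: a real object 'fits' if it meets every constraint."""
        return all(constraint(obj) for constraint in self.constraints)

# The possibility we are directed at after seeing only an object's front:
front_only = ObjectPossibility(
    constraints=[lambda o: o["front"] == "bark and branches"],
    expected_transitions={
        # any reasonable backside counts; only a shocking one is ruled out
        "walk_around": lambda new_view: new_view != "shocking",
    },
)

tree = {"front": "bark and branches", "back": "more bark"}
print(front_only.fits(tree))  # True: so, speaking third-personally, we are directed at it
```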

This model of intentionality implies that each object possibility is defined over basically a single domain. A tree, for example, seems like it would be defined entirely in terms of sensory inputs. But consider what happens when you unknowingly encounter a fake tree. You believe it to be real, but regardless of whether it is an actual tree or not you are intentionally directed at it through the kinds of sensory impressions you can have about it. But one day you find out that the tree is not really a tree at all, but a fake. With this revelation something, something hard to define, happens. You see your previous intentional directedness at the fake tree as in error somehow. Although you realize that you were intentionally directed at the fake previously, you will not be intentionally directed at it in the same way in the future. Let me provide another example of this phenomenon at work. This time the tree really is a tree, but one day it is uprooted and dragged somewhere else. As you walk by it you are intentionally directed at it, but as some unknown uprooted tree beside the road. But if someone tells you this was the tree that you had previously encountered you will again experience the way you are intentionally directed at it change. Your previous intentional directedness at it, as an unknown uprooted tree, will again be seen as in error somehow, although it was still directed at it, just not in the “right” way.

Similar phenomena can also occur in language and perception, which is no surprise since both reduce to intentionality in significant ways (meaning that explaining either requires some appeal to intentionality). In language, for example, we have the famous example of the martini drinking man, who is really drinking something else (and may not even be a man), but who appears that way to you. You can communicate successfully about him using those terms, even if your listeners know that he is not drinking a martini (probably). However, if you become aware that he isn’t really drinking a martini it will become apparent to you that somehow your intentional directedness at him was flawed. But in that case how were you successfully able to communicate? Similarly, in perception, you may experience a vivid hallucination of a purple rat. Your perceptions about this rat are caused by some internal glitch. Thus third parties may agree that what your thoughts about the purple rat are really about is the glitch. However, in terms of your intentional directedness at the purple rat, it is clear that if the rat doesn’t really exist then, from your perspective, your intentional directedness is a failed directedness, one with no actual object.

All of these cases can be explained in basically the same way, or so I claim. To account for them I would revise my original explanation of intentionality. In my original explanation object possibilities were defined by a particular kind of input, for example sensory input. To this I would add that there are a number of “layers” to each possibility. Each layer is defined by a different kind of input. For example, in the case of our tree there are three layers that seem relevant. One is the perceptual layer, which defines the tree as having a particular appearance. Another is a conceptual layer, which identifies it as a tree, a concept that is tied to other concepts and perceptual inputs in ways that define it. And we might add to the conceptual layer other abstract properties we associate with the tree, such as location, or we might relegate them to their own layers. Finally we have the object layer, which defines the tree as a single object existing over time.
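
To make the layer talk concrete, here is a minimal toy encoding; the layer names follow the text, while their contents are invented placeholders.

```python
# Toy encoding of a layered object possibility. The layer names follow the
# text; the contents are invented placeholders.

tree_possibility = {
    "perceptual": {"appearance": "tall brown trunk, green canopy"},
    "conceptual": {"kind": "tree", "location": "beside the road"},
    "object":     {"identity": "the same single object over time"},
}
```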

Not all of these layers are equally important. What we consider our own intentionality as actually being directed at, after we learn all the facts, depends on which layer or layers are the strongest. I am not going to commit myself to a claim about which layers are stronger and why at this time, but I will use the idea to explain the examples provided previously. Let us turn to those examples then. In the case where the tree turns out to be a fake the perceptual layer and the object layer hold, they are still directed at the fake tree, although the conceptual layer no longer matches up with it. However, the conceptual layer is weaker, and so we “fix” our intentionality by revising it. The fact that it doesn’t match the other layers is what causes the feeling that something has gone wrong. And the case with the uprooted tree is similar, although in this case it is the perceptual layer that is out of line with the others. The sensation that something has gone wrong intentionally is, in this case, caused by our revising the perceptual and conceptual layers with which we were previously directed at the uprooted tree to incorporate a new object layer. In the case of the martini drinking man our language conveys the conceptual layer, that the object of our intentions is a man and is drinking a martini. But our listeners, if they cannot find anyone to match to that conceptual layer (and thus add a perceptual layer of intentional directedness of their own as well), will reconstruct our perceptual layer (looking for someone who looks like a martini drinking man) and intentionally direct themselves at that person. Finally, in the case of the purple rat our perceptual layer is directed at the glitch, because it is the source of those perceptions, but the object and conceptual layers fail to get a grip on anything, and so we consider our own intentional directedness failed and without any real object, although there was something that satisfied part of it.
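
Continuing the toy encoding above, the fake tree case might be sketched like this; the layer “strengths” are invented numbers, purely for illustration.

```python
# Toy continuation of the encoding above: learning that the tree is a fake.
# The layer "strengths" are invented numbers purely for illustration.

strength = {"perceptual": 3, "object": 2, "conceptual": 1}
revelation = {"kind": "fake tree"}

# Which layers do the new facts contradict?
mismatched = [
    name for name, layer in tree_possibility.items()
    if any(key in revelation and revelation[key] != value
           for key, value in layer.items())
]
print(mismatched)  # ['conceptual']: the mismatch is the felt sense of error

# "Fix" our intentionality by revising the weakest mismatched layer.
weakest = min(mismatched, key=lambda name: strength[name])
tree_possibility[weakest].update(revelation)
print(tree_possibility["conceptual"])  # now identifies it as a fake tree
```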

February 3, 2007

Information And Causation

Filed under: Information,Intentionality — Peter @ 12:00 am

Information, as defined by information theory, seems to ultimately reduce to the causal relations between a system and the objects it has information about. And if this were how information had to be defined it would pose a problem for informational functionalism, since it would make it a kind of causal functionalism. And causal functionalism isn’t a workable theory about the mind, since it is an externalist theory.

The best response to this problem is not to reject information theory, but rather to realize that what it describes by the use of the word information and what informational functionalism describes by the use of the word information are not exactly the same thing. Let me illustrate the difference with an example. Consider then a piece of paper on the ground that says “some elephants are red”, which, unbeknownst to you, was generated by a random process. Does that paper convey any information to you? Information theory says no: since there is no reliable causal channel between elephant color and the writing on the paper it cannot convey information to you. In contrast informational functionalism would say yes, that the paper conveys the information that some elephants are red (assuming you can read), but that the information happens to be wrong.

So where is the difference? Well, what is meant by information in information theory is really what we would call reliable information in everyday discourse, or perhaps knowledge. For information to be reliable there must be a connection between the actual state of affairs and the information about that state of affairs. And so when there is no connection there is no information in this sense. But when we talk about information in the context of informational functionalism we certainly don’t mean reliable information. Much of the information that is part of a system may be inaccurate, distorted, or otherwise unreliable.
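
The information-theoretic side of this contrast can be made precise with a short sketch. Using a made-up joint distribution, a randomly generated message carries zero information in Shannon’s sense about elephant color, while a reliable reporter’s message carries a full bit:

```python
from itertools import product
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits, from a joint distribution {(x, y): probability}."""
    xs = {x for x, _ in joint}
    ys = {y for _, y in joint}
    px = {x: sum(p for (x2, _), p in joint.items() if x2 == x) for x in xs}
    py = {y: sum(p for (_, y2), p in joint.items() if y2 == y) for y in ys}
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

colors = ["gray", "red"]
messages = ["some elephants are red", "some elephants are gray"]

# Random writing process: the message is independent of elephant color.
independent = {(c, m): 0.5 * 0.5 for c, m in product(colors, messages)}
print(mutual_information(independent))  # 0.0 bits: no information in Shannon's sense

# A reliable reporter: the message tracks the actual color.
reliable = {("gray", "some elephants are gray"): 0.5,
            ("red", "some elephants are red"): 0.5}
print(mutual_information(reliable))     # 1.0 bit: a reliable causal channel
```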

Obviously the kind of information that informational functionalist theories refer to will have to be relativized to a system (which is why the paper only conveyed information about elephants if it could be read). So, as a starting point, let me assume then that we have a way of determining the intentional directedness of a system given all the facts about its operation (with the further assumption that we define intentionality in an internalist fashion). (I presented one theory about how this might be done in my thesis draft. Essentially the proposal is that what corresponds to intentionality in a system is an internal structure that encodes input-output correlations. For example, the internal structure that corresponds to being intentionally directed at a tree, or more specifically, the structure that is activated when a system is intending a tree, contains the inputs that the system would receive from perceiving a tree and how those inputs might change if the system took various actions. Of course intentionality may be directed at non-perceptual things as well. For example, the intentional structure that is directed at numbers has as its inputs various mathematical objects and describes how those mathematical objects act as a result of being operated on mathematically. Naturally the mathematical inputs and outputs of this structure are themselves abstracted from the many possible perceptual inputs and behavioral outputs that are the vehicles with which we deal with numbers in the world, in contrast to the structure that corresponded to being intentionally directed at a tree, which did not deal with input abstractions, but rather with direct perceptual inputs.) Information then corresponds, roughly, to intentionality. To say that a system receives new information is to say that it becomes intentionally directed at something new. And the intentional habits (or abilities) that already exist in the system are the information it already contains.

For example, my mind contains information about unicorns because I have the capacity to be intentionally directed at them (well, at possible unicorns, since real unicorns don’t exist). This means that I have a conception of unicorns in which they have certain properties (horse-like appearance, one horn); under almost all theories about intentionality this is what it means to be intentionally directed at something (conceiving of it as having certain properties). My mind also contains the information that unicorns don’t really exist. This isn’t so much a property of my being intentionally directed at unicorns, since when we think about anything we think of it as existing, but a property of my being intentionally directed at the real world, which contains the expectation that I won’t find any unicorns in it, or any evidence that unicorns really exist.

So, back to the original example, the paper that states “some elephants are red” contains information, to a system that can understand the words, because when such a system reads the paper an intentional structure that is directed at elephants that have the potential to be red (or possibly at red elephants) is either created or brought to mind, and so we would say that the paper conveys that information to that system.

And so information can be defined in a way that is independent of causation, assuming that you accept a definition of intentionality that is independent of causation.

January 28, 2007

The Intentional Relation

Filed under: Intentionality — Peter @ 12:06 am

When we think about intentionality we think of the mental state as being directed at, or about, objects in the world, which implies that intentionality is a relation between mental states and objects. But if intentionality is a relation it is certainly an unusual one, because the intentional relation can also hold between mental states and imagined objects, objects that have no real counterparts. How is this a problem? Well, usually relations hold between one kind of thing and another kind of thing. For example, the mathematical relation “greater than” holds between numbers, and the relation “purchaser-of” holds between agents and things that can be bought. But it is hard to see what single kind of thing imaginary objects and real objects could both be described as. Another problem is determining when the intentional relation should hold. Normally a relation holds based on certain properties of its objects. But imagined objects and real objects have few, if any, properties in common. Real objects have real properties, and imagined objects have imagined properties. If the intentional relation holds because of certain real properties of the object then it can’t hold of imagined objects, and vice versa.

A solution taken by many to this problem is to assume that intentionality is simply relation-like, that it seems like a relation but really isn’t. Certainly this is an elegant solution to the problem, but it is also an extremely uninformative one. If intentionality isn’t a relation then what is it? Of course those who take this approach to intentionality rarely stop to describe exactly what it is; instead they describe how it works, specifically that you are intentionally related to an object when that object somehow fits your mental picture of it. But this handles only the real world cases, and it even mishandles some of them, for example when you think about the martini drinking man, who, unbeknownst to you, is really drinking water (to pinch an example from Donnellan). And to resolve those problems we must invoke some principle of best fit, where objects that are sufficiently close to those being imagined count as fitting the description. But such fixes make intentionally relating to imaginary objects impossible, since they cannot “fit” the description, as they don’t actually exist*.

My solution to this dilemma is to re-cast intentionality as a relation between mental states and object possibilities (sets of possible objects). Specifically, I would say that the mind is intentionally related to the possible object that fits the properties it is being conceived of as having (and this is why it is really a relation to a range of possibilities; since not every last detail is conceived of, there are a number of possible objects that can fit the description). Of course I can’t ignore the fact that we often do talk about intentionality as though it related us to actual objects. I account for this as simply a way of talking about intentionality: to say we are intentionally directed at an actual object is to say that the possible object the mind is intentionally related to happens to have a real world counterpart (I’ll leave the nature of the counterpart relation unspecified, although it probably involves some notion of “fit”).

The above theory about intentionality has a number of attractive properties. It makes intentionality depend only on internal factors, it makes intentionality a constructed fact (a description of the world instead of a causal part of it), and it explains the absence of a subjective difference between intending real and imaginary objects. Although, for those very reasons, I suspect externalists won’t like it.

* This is why dealing with reference is much easier than dealing with intentionality: you only have to deal with the actual.

January 23, 2007

Thesis Draft, Sections 3-4

Filed under: Intentionality,Mind — Peter @ 12:56 am

3: Intentionality

Intentionality, loosely speaking, is the ability of the mind (or parts of the mind) to be about (directed at) objects in the external world. At first glance this might not seem like a problem, since complex physical systems can do a number of interesting things, and being intentionally related to objects in the world might simply be one more of those things. However, some claim that material systems can’t be intentional, and thus to set these doubts aside we need to describe how a completely physical system could justifiably be said to be intentional.

3.1: Separating Intentionality From Reference

But before we do that we need to clear up some of the confusion surrounding intentionality. Intentionality, by its nature, is a somewhat ill-defined notion. But before I can say what intentionality is I need to say what it is not. First of all, intentionality is not (necessarily) reference. How exactly we define reference is something that is up in the air at the moment; there are several theories about it that all deserve serious consideration. The problem with simply collapsing intentionality into reference is that reference may be defined in terms of the content of the external world. And thus identifying intentionality, part of the mind, with reference would lead to a kind of externalism, which cannot be a viable materialist theory about the mind, as argued in section 2. Thus we need to separate our theories of reference from our theories about intentionality. Of course the possibility that they will turn out to be the same thing cannot be ruled out either, but we can’t simply presuppose it.

3.2: Separating Intentionality From the Experience of Intentionality

And intentionality must also be separated from the experience of intentionality, what it is like to be in an intentional mental state. I wouldn’t deny that there is an experience of intentionality, but the experience of intentionality is best treated as part of the discussion about qualia (section 4). To combine the two is to make the problem unnecessarily hard by combining two tricky issues into one. And there is no reason to believe that intentionality and the experience of intentionality must go hand in hand. Certainly some people can see without having the experience of sight (blind sight), so it isn’t hard to imagine that there may be intentionality without an accompanying experience of that intentionality (for example, if we taught someone with blind sight to navigate around a room using their unconscious sight it would seem as if they had a kind of intentional state concerning the room without being conscious of that state). In fact, systems without consciousness, and thus without experiences of any kind, might have a form of intentionality. Again, to decree that the two must go hand in hand would be to make an unjustified presupposition.

3.3: Why Intentionality Isn’t a Problem

But if we strip these confusions away intentionality no longer seems like much of a problem. Even if we don’t want to develop a detailed account of intentionality we can point out a number of completely physical systems that display evidence of intentionality. And if these completely physical systems can have intentionality then clearly our brain, also hypothesized to be a completely physical system, could have intentionality, thus showing, without even developing a complete theory about it, that intentionality isn’t a problem for materialism. One such system that displays evidence of possessing intentionality is the humble Electrolux Trilobite, a robotic vacuum cleaner. The Electrolux Trilobite displays intentionality (or so I claim) by not running into obstacles when it vacuums (although it seems highly unlikely that the robot has an experience of intentionality). This seems like good evidence that the robot’s “mind” is in some way about or directed at the room, at least enough that it doesn’t run into things. If this isn’t evidence that the robot possesses intentionality then what evidence do I have that other people possess intentionality?

But perhaps this plays too much on our intuitions about intentionality, and as we well know reality is not obligated to conform to our expectations. To set aside these worries I will briefly outline one possible explanation of intentionality in purely materialist terms. Although I won’t guarantee that it is the right explanation at the very least it shows that plausible materialist explanations of intentionality do exist.

So, as a starting point, let me define intentionality directed at some object, say a tree, as some information in the system that encodes what kind of perceptual experiences the system might have of the object, as well as the behaviors that the system might engage in with that object, and the experiences that would result from these behaviors. This information then, in that system, is intentionally directed at that object. This seems promising, but there is an obvious problem with it, since we have defined what it means to be intentionally directed at something partially in terms of the object, which would be a kind of externalism.
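
As a toy rendering of this definition (all entries invented for illustration), the information directed at a tree might look like a table of expected perceptions keyed by possible behaviors:

```python
# Toy sketch: intentional information about a tree as a table of expected
# perceptual experiences, keyed by the system's possible behaviors.
# Every entry is invented for illustration.

tree_directed_information = {
    "look":        "green canopy over a brown trunk",
    "walk_closer": "the trunk fills more of the visual field",
    "touch":       "rough bark under the fingers",
    "push":        "little movement, rustling leaves",
}
```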

To resolve this problem the object itself must be pulled out of the definition. Consider the system by itself, not connected to a body or even a world. The system still has inputs and outputs (perceptions and behaviors), but we have removed it from any possible context that could give meaning to those inputs and outputs. Still, if we were given all the information about the system we could generate possible worlds it could be embedded in, which I will call world-models. This would be via a process of discovering what ideas were triggered by a given input and then discovering what kinds of inputs the system expects if it generates various outputs (how its perceptions will change if it behaves in certain ways). If we put all the different input-output correlations together we drastically narrow the possible world-models. But we will never be able to narrow them down to just one world-model. For example, if we were studying the brain in this way one model of the external world that would satisfy all the input-output correlations would be a world containing real objects, a world very close to the real one. But another, equally possible world-model would be a world in which the system was running inside a simulation of the real world. And there might be even more abstract, but consistent, possible world-models as well, such as some unusual mathematical structure, in which inputs represent some complicated matrix and outputs are different operations. In each of these possible world-models there will be some object or objects that our intentional information is directed at (some features of the world-model that satisfy a particular group of input-output correlations). So let us say simply that this intentional data is directed at the set of those possible objects, which ultimately reduces to a very complicated connection between outputs, inputs, and the structure of the system, a definition completely independent of what is actually in the world.
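
A minimal sketch of the narrowing process (the candidate world-models and the consistency test are invented placeholders): we keep just the world-models in which some feature reproduces the system’s input-output correlations.

```python
# Toy sketch of narrowing world-models by a system's input-output
# correlations. The candidates and the test are invented placeholders.

def predicts(world_model, correlations):
    """True if some feature of the model reproduces every correlation."""
    return all(world_model["response"](action) == expected
               for action, expected in correlations.items())

correlations = {"walk_closer": "trunk fills more of the visual field"}

candidates = [
    {"name": "real world with a real tree",
     "response": lambda action: "trunk fills more of the visual field"},
    {"name": "simulation of the real world",
     "response": lambda action: "trunk fills more of the visual field"},
    {"name": "world with no tree-like feature",
     "response": lambda action: "open field"},
]

consistent = [m["name"] for m in candidates if predicts(m, correlations)]
print(consistent)  # the first two survive; the intentional data is directed
                   # at the set of tree-like objects across surviving models
```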

But, you might object, when I think about a tree my thoughts are directed at the tree, not some strange construct of input-output correlations. I freely grant that the experience of intentionality doesn’t seem this way, but, as I mentioned before, our experience of intentionality must be separated from intentionality itself. We might say that the experience of thinking about the tree has the qualia of being directed at a tree (a real tree), but that really the mind is only directed at the things that satisfy a complex construct of input-output relationships, which a real tree would satisfy (among other possibilities). Of course the qualia, what it is like to be intentionally directed at a tree, needs to be explained as well, which brings me to the next section.

4: Qualia

Qualia is a word invented to describe the subjective character of an experience. For example, when you see a red object you experience the qualia of that shade of red, which is not the same as the red light that enters your eyes, as one is a feature of experience, a way of describing it, while the other is a part of the natural world. Obviously the two are in some way connected, but it is not obvious how. And by being inherently subjective qualia pose a large problem for materialist theories about the mind. Because they aren’t based on the physical facts (at least not obviously) it seems legitimate to take any materialist theory about the mind, say a theory of vision, and ask why red feels like red and not green to that system. Now we can give explanations of why it is said to be red, why that system will behave as though it were red, and possibly even form thoughts as if it were red, but what is wanted is an explanation of why it “feels” red.

4.1: Eliminating Qualia

By their nature qualia seem simply irreducible to physical facts about the system and the way it operates. Thus if materialism is to resolve the problem of qualia it must get rid of them. Naturally there will be those who object to this proposal. The first problem it might create is that by getting rid of qualia we may be doing away with subjective experience altogether. And if we get rid of subjective experience we have gotten rid of consciousness, the very thing we want to explain. But I do not think that getting rid of qualia forces us to discard subjective experience; however, I will have to ask that you simply assume that it doesn’t for the moment, since I have left the materialist treatment of consciousness, and subjective experience, for section 7, and I can’t explain how consciousness can still be said to remain without qualia until I have a theory about that consciousness. The second possible problem is that by eliminating qualia we might be eliminating something that we know indubitably exists, like an attempt to eliminate solidity: no matter what you say, the fact that some objects are solid still remains. But in a way we do eliminate solidity: our ultimate explanations concerning the phenomenon contain only atoms and electromagnetic forces, and no fundamental quality of solidity. However, we don’t balk at these theories because they also explain how the macroscopic phenomenon we think of as solidity arises from these more primitive components. Likewise, the theory about qualia I outline below will eliminate them, but will also explain why a system might think and act as if there really were such things as qualia. Hopefully, by explaining why we think qualia exist when in fact they don’t, the theory will be less offensive to common sense, just as a theory about matter that doesn’t contain any reference to solidity is still acceptable if it explains why the objects we think of as solid don’t pass through each other.

4.2: A Brief Framework

But before I explain why we think that such things as qualia exist allow me to briefly explain what I think is really going on in the mind. Under materialist theories we can think of the mind as a system that receives inputs and produces outputs, a way of looking at minds that we have already used to explore intentionality. Let us specifically consider vision in one such system. This system receives an array (a two dimensional grid) of signals, each of which corresponds to some unique color. We can think of each of the raw signals as being a single letter inside the system (the letter stands for the signal itself, so there is no temptation to treat it as a number or some other complex construct). And the only processing the system can do on these letters is based on their position in the grid and on comparisons of them to each other (given two letters the system can tell how similar they are, although this is itself a primitive function, the workings of which are not exposed to the higher functions of the system). It is my hypothesis that this is all there is to the experience of sight (and in general to all perception of the world): the processing of certain primitive signals. Thus, by extension, all it means when I say that I feel that I am experiencing red is that I am processing a certain signal or group of signals.
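
As a toy sketch of this hypothesis (everything here is an invented illustration, not a model of the brain): signals can be represented as opaque tokens whose internals the higher functions never read; all they get is grid position and a primitive similarity comparison.

```python
# Toy sketch: perceptual signals as opaque tokens. Higher functions may use
# a signal's grid position and the primitive similarity comparison, but
# nothing exposes what a signal "is". All details are invented.

class Signal:
    def __init__(self, letter, position):
        self._letter = letter     # stands for the raw signal itself; never read upward
        self.position = position  # position in the two dimensional grid

    def similarity(self, other):
        """Primitive comparison: reports a degree, never the reason for it."""
        return 1.0 if self._letter == other._letter else 0.3

a = Signal("r", (0, 0))
b = Signal("r", (0, 1))
c = Signal("g", (1, 0))
print(a.similarity(b))  # 1.0: "the same", with no structure to introspect on
print(a.similarity(c))  # 0.3: "different", again without any further analysis
```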

Obviously the human mind has a more elaborate visual system: we favor certain colors over others, and think certain combinations are more pleasing than others. But this doesn’t make the signals themselves any less primitive. We could work these complexities into the system by arguing that these preferences are either themselves generated by another basic kind of processing (like the comparison of similarity between signals), or are built on top of the comparison, in the sense that certain degrees of similarity are pleasing while others aren’t.

4.3: The Introspective Gap Solution

But this is not the complete explanation. As I mentioned in section 4.1, the above can only be an acceptable explanation (or elimination) of qualia if we can also explain why we think and act as if there were such things as qualia. I propose that the reason we have constructed these qualia is that the signals, and the comparisons between them, are simple to introspection, meaning that no amount of introspection can reveal their structure. When we think about our perception of a color introspection doesn’t tell us why the color is the color it appears to be; it just simply is that way. Red is red and green is green. Likewise we can’t say why two colors are similar on the basis of introspection. Of course we have constructed theories about color similarity, based on knowledge about how various colors appear when mixed, but these theories are not based in any way on our introspective awareness of color similarity, which only tells us how similar they are, not why.

So what does the mind do when some aspect of it is closed to introspection? Well, in general minds could do anything, but human minds have a specific and well-documented response: they make things up. For example, the mind can give us a solution to a problem without our being consciously aware of where that solution came from (because it was developed by the unconscious). If you ask a person in such a situation where that solution came from they will fabricate a story, a story that has nothing to do with how they really came to the solution. But they will believe that story, and be totally unaware of the fact that it is a fabrication. Similarly, if basically identical items are placed in a sequence people tend to prefer the rightmost one (we can show this by shuffling the items and demonstrating that in each arrangement the rightmost is favored by a statistically significant margin). This means that some people had no reason to pick that one except for its position. But when you ask people why they picked it every last one of them will have a story about why it is superior to the others, and will be totally unaware of the fact that their choice was determined largely by its position. [yes, I will cite sources] I propose that qualia are another one of these made up stories. When we introspect on our experience we come to certain things that are un-analyzable by the mind. The mind doesn’t know why the signal is the way it is, so it makes up a story, just as it does every other time introspection can’t reveal the answer. And that story is called the “feel”: we say it is red because it “feels” red. What does it mean for something to “feel” red? Well, we can’t say (except maybe to make analogies to other colors – it’s a bit like orange – but we then find ourselves unable to say exactly how it differs from orange – well, it’s more red). I think this is good evidence to support the idea that the feel is a construction in response to a failure of introspection, and not a reflection of the real structure of the mind.

4.4: A Brief Detour Back To Intentionality

So, now that I have given a brief outline of a materialist theory of qualia, let me go back and completely wrap up the materialist theory of intentionality I had developed above. As you remember I said that it didn’t completely explain the experience of intentionality, but that the remaining work, explaining the feeling of a thought being about an object, could be done by our theory about qualia. Now part of our experience of intending is expectations about perceptions. When I think about an apple I have expectations about what the apple will feel like, look like, and so on. And I also have a set of expectations about what will happen if I move in relation to the apple, if I bite into it, if I throw it, and so on. These reflect the input-output correlations that the theory claimed were the real basis of intentionality.

But there is more to the experience of intentionality than that. When I think about an apple I am not just thinking about those things, I also feel that I am thinking about an apple. It is natural to suppose that this is a qualia, and thus represents an introspective failure. But in the case of perception introspection failed when it attempted to analyze inputs that were basically un-analyzable, so what is introspection unable to uncover in the case of intentionality? I think it is the connection between all these expectations. Obviously the many and varied expectations we have about an object are connected to each other, otherwise they wouldn’t all be available to us when we needed to think about that particular object. I hypothesize that when we introspect on that connection our introspection fails: the connection just is there, without further details. But, as usual, introspection makes up a story for us, in this case that the expectations have the “feel” of being about the same object.

4.5: Mary

Finally, I would like to address a thought experiment that brought qualia into focus as a problem for materialist theories about the mind, and show how this theory about qualia provides a satisfactory resolution to it. In the thought experiment we are asked to imagine a color-blind woman, named Mary, who studies color vision. By hypothesis Mary knows all the physical facts there are to know about color vision. One day Mary’s color-blindness is fixed by a new surgery. Now that she can see colors doesn’t she learn something new, specifically what colors feel like? If this is true then clearly the physical facts don’t reveal all there is to know about the mind. [yes, I will cite the author of this version of the argument] With a single additional hypothesis the theory of qualia developed above can resolve this apparent problem. The hypothesis is that the primitive signal corresponding to a specific color (or color family; perhaps we can fill in missing shades by extrapolation, as Hume thought) can only be generated by either perceiving the color or recalling a perception of the color. This shouldn’t seem too unlikely, I hope, since if the signal really is un-analyzable it doesn’t seem like we should expect the mind to be able to construct it from nothing. So, when Mary gets color vision, she gets a new ability: to think about the color red in a way she couldn’t before. And when she reflects on this color her introspection will make up a story for her about the way it “feels”, a story her mind has never had to conjure up for her before about color. But neither of these is new knowledge: one is an ability and the other is false. And so the argument doesn’t show that there are facts about the experience of color that aren’t revealed by knowledge of the physical facts.

October 7, 2006

Intending

Filed under: Intentionality,Mind — Peter @ 11:58 pm

Let me begin describing what intending is by contrasting it to intentionality and representation. Representing is best described as a relation between a system and objects in the world. Because the representational relation involves both the system and the object, properly speaking it is part of neither, but rather a description of them both. Is intentionality representation? It certainly seems like a good fit, since we expect intentionality to be the connection between a mental act and its contents in the world, and representation can certainly fill that role. [1] The connection between the mind and the external world cannot be captured completely by representation, however. Certainly we experience our mental events as being about objects, and not just about objects in general, but specific objects. To describe this feature of the mind I use the word intending. Perhaps this is not the best term, since it could be easily confused with intentionality, but since they are so similar I thought it best to give them similar names.

If intending is part of the mind clearly it can’t be a relation between the mind and the external world. [2] One way of describing intending is as a horizon of experiences that defines our conception of any object, an idea developed by Husserl. [3] [3b] By horizon of experiences we mean that the idea, or act of intending an object, is defined by a number of perceptions that we expect we could have of such an object. This is not to say that we could have all the perceptions in the horizon of an object at once; in almost all cases that would be impossible, as the horizon includes perceptions of both the front and the back. The idea that a horizon of experiences defines our conception of an object might seem acceptable for unique objects, such as people, but how can we handle concepts such as dog, where there is not one way of looking like a dog? What we have to abandon is the naïve assumption that the horizon is defined by a number of perceptions all from different “points of view”, so to speak. There are a large number of possible visual perceptions even from the same “point of view” that are part of the horizon; for example, in the case of the concept “dog” the horizon is made up of all possible perceptions of all possible things that could be a dog. [4] Even though a given horizon is composed of an infinite number of perceptions, each horizon picks out an object or set of objects from the vast number of all possibly experienced objects on the basis of those perceptions. Thus horizons make a good candidate for what we mean by intending; defining what intending a specific object is by appealing to a distinct horizon makes no reference to how things really are. By making use of horizons there is no need to posit a necessary connection between the act of intending and the objects in the external world, but at the same time we have a distinct notion of how one object can be intended by a thought act while another is not.

One objection that might be raised against the existence of such a horizon, or against its ability to be what is behind intending, is that our physical minds don’t have the capacity to hold horizons as described here, since they are infinite in size. What this objection doesn’t take into account is that the description of horizons given above is not meant to describe how they are physically realized but how they are experienced and how they pin down an object or objects as being intended. As to how they might be realized physically, consider the formula x > 2. In this formula we have defined x as an infinite number of possible values using only three symbols. Similarly the horizon for an object is likely defined by a small number of mental “rules”. We could have defined intending in terms of those rules instead of the horizon, since they are equivalent. I didn’t, however, because the exact form and nature of such mental rules is the subject of much debate, while the fact that they give rise to a horizon is not.
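
The x > 2 analogy translates directly into a sketch (the rules themselves are invented examples): a horizon containing infinitely many possible perceptions can be realized by a handful of finite rules.

```python
# Toy sketch: a horizon realized by finitely many rules. Just as "x > 2"
# picks out infinitely many values with three symbols, a small predicate
# picks out the infinite set of perceptions in the horizon of "dog".
# The rules themselves are invented examples.

dog_rules = [
    lambda p: p["legs"] == 4,
    lambda p: p["shape"] == "canine",
]

def in_dog_horizon(perception):
    return all(rule(perception) for rule in dog_rules)

print(in_dog_horizon({"legs": 4, "shape": "canine", "color": "brown"}))    # True
print(in_dog_horizon({"legs": 4, "shape": "canine", "color": "spotted"}))  # True
print(in_dog_horizon({"legs": 2, "shape": "avian", "color": "gray"}))      # False
```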

The last piece of the puzzle is how intending relates to representing. In many, if not most, situations intending and representing seem to overlap: when one is intending something one is also representing it, and vice versa. There are, however, cases in which the two come apart. It is possible to represent objects that we aren’t or can’t be intending, for example when a patient with blind sight successfully reaches for objects they can’t see. They must be representing the object in order to direct their actions, but since they aren’t conscious of the objects it is equally apparent they don’t intend them. Intending also has the possibility of being directed at objects that don’t exist (for example thinking about unicorns), and of being mistaken (intending one object when the actual presented object is something else), while representation can do neither. The best way to describe intending, then, is as the part of the conscious mind that is responsible for the ability to represent things [5] but that may be carried out even when representation is absent.

Notes:

1. More on representation / intentionality here

2. To say that it was would simply collapse it into representation, and would also run up against the many problems facing externalism (see here).

3. Of course Husserl is somewhat obscure in places, so there are probably other readings of his ideas concerning the horizon; don’t cite me as an authority on Husserl, I’ll leave that job to David W Smith.

3b. And it turns out that while the ideas about intending presented here are certainly something Husserl might have agreed with it is definitely not what he meant by horizon. I told you not to cite me.

4. I should note that the horizon, which determines what we are intending, is not all there is to thinking about some object. We often associate non-perceptual ideas with an object, for example ownership, and the horizon does not capture these aspects of thought. However, the horizon does capture what matters for intending an object “in the world”, which is what matters here.

5. Remember that representation is a correlation between a property of the system and a feature of the world, where the property of the system is responsible for the system acting as if that feature of the world is as it actually is. The horizon of an object is a conscious property that is correlated with the actual object (via perception), and that horizon informs our behavior and is responsible for its being properly directed at the world. Thus without intending consciousness couldn’t be said to represent anything, even though intending is not the same thing as representing and representing can be carried on without consciousness.

