On Philosophy

September 30, 2006

Intentionality From a Systems View of the Mind

Filed under: Intentionality — Peter @ 12:51 am

Recently I have been considering the idea that it is only possible to determine whether a system is conscious by examining how the state of the system at a given instant is connected to previous and subsequent moments, as well as the properties of the system at that moment (which are all that some theories consider relevant for consciousness). This same approach can be used to examine intentionality, specifically to address the question “which systems have intentionality?”

When we ask that question we are specifically interested in which systems have a “primary” about-ness. We intuitively understand that there are many kinds of things that can be about other things; for example, photographs are about their contents. However, many of the things we consider to have about-ness have “secondary” about-ness, meaning that they are only about the things that they represent because beings exist who interpret them as being about those things. A photo of a dog is only about a dog because it invokes in us sensations similar to those that we have when we see a real dog. To beings with different methods of visualizing the world that very same photo wouldn’t be about anything at all, and their equivalent of photos wouldn’t be about anything to us. Thus the photo’s about-ness depends on other things. In contrast, when a person sees a dog their experience is about the dog no matter who or what else exists in the world, and so in an important sense this about-ness, which we call intentionality, is more fundamental.

Just as when investigating consciousness, problems arise when attempting to find some criterion for an intentional relation in a specific instant. No arrangement of matter, it would seem, could be intentional, because we could take an image of that arrangement, and, despite the fact that the image preserves all the relevant information, the image would not have intentionality. As with consciousness, the solution is to consider the whole system, not just a specific instant of it. Fred Dretske, in his paper “A Recipe For Thought”, proposed that there are some systems, such as compasses, that possess a primitive intentionality. The criterion for this primitive intentionality is that a property, P, of a system is about some feature of the world, C, if and only if the presence of P is usually caused by C. Thus a compass’s needle might be “about” the direction of the North Pole, since its orientation is usually caused by the earth’s magnetic field.

There is one problem with this account, however, which is that it seems a person or other being is still required to give the orientation of the compass’s needle its meaning. Although the needle would still point north in a world devoid of people, it seems possible that it wouldn’t be about that direction. To remedy this we add the criterion that not only must P usually be caused by C but P must also cause the system to act (including thought-acts) as if C. So a normal compass wouldn’t be about the North Pole, but a simple robot that was programmed to move in a northwards direction based on information from a compass would be. Of course behavior is how we usually determine whether a system has intentionality, since it is what we can most easily observe, but internal changes (thoughts) also count, as mentioned above, and so it is possible that some completely immobile systems have intentionality, although we might never know it.
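The two-part criterion can be made vivid with a small sketch (my own illustration, not from Dretske): a property P counts as being about C only when P is usually caused by C *and* P leads the system to act as if C. The names and the boolean modeling are purely hypothetical simplifications.

```python
# A toy model of the two-part criterion for primitive intentionality:
# P is about C iff (1) P is usually caused by C, and
#                  (2) P causes the system to act as if C.

def is_about(usually_caused_by_c: bool, causes_action_as_if_c: bool) -> bool:
    """P is about C only when both conditions hold."""
    return usually_caused_by_c and causes_action_as_if_c

# A bare compass: the needle's orientation is usually caused by the
# earth's field, but the compass itself does nothing with that fact.
bare_compass = is_about(usually_caused_by_c=True, causes_action_as_if_c=False)

# A simple robot that steers north based on the same needle reading:
north_robot = is_about(usually_caused_by_c=True, causes_action_as_if_c=True)

print(bare_compass)  # False: no primitive intentionality
print(north_robot)   # True: the reading is "about" north
```

The point of the second condition is exactly what separates the two cases: both share condition (1), and only the robot satisfies (2).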

This account fits perceptual intentionality well, but it still needs some work to cover all cases of “primary” about-ness. Most significantly, we have thoughts about external objects, but these thoughts are not caused (directly) by those objects. Fortunately, the account we have been developing requires only a small change to accommodate this. Instead of requiring that C be the usual cause of P we instead require it to be the usual external cause of P, allowing P to be caused by internal processes as well (specifically whatever unconscious processes produce thoughts). And instead of requiring P to result in acts as if C we instead require P to result in acts appropriate to the mode of presentation under which C is given. This is simply a way of saying that when P is invoked because of perception one set of acts is expected to result, those appropriate for C really being there, while when P is invoked because of a “supposing” another set of acts results, those appropriate to C being merely possible, and so on for all the distinct ways in which P can be invoked.

Finally, I should probably mention that this account of intentionality is not a form of externalism (although Dretske does develop his theory into an externalist account). This is because intentionality, as presented here, is not a part of the mind; instead it is a way that we can describe or talk about the mind. As far as the operation of the mind is concerned the cause of P is irrelevant (in the sense that the mind will have the same sequence of states, and the same consciousness, regardless of whether P at a particular moment is caused by C or by something else).


September 29, 2006

Zoning Out

Filed under: Mind — Peter @ 12:00 am

Many people have experienced a feeling of “zoning out” on long car drives, where they seem to drive for some time without realizing that they are driving. In some ways it feels as though the time in which you were driving while “zoned out” simply didn’t happen, as if you were unconscious.

There are three proposals as to what might be happening during these episodes: that the driving is being done by the unconscious mind, that you are conscious during this time but simply not forming memories, and that higher introspective consciousness is absent while perceptual consciousness is still present. The only one of these possibilities that I find implausible is that perceptual consciousness remains while introspective consciousness is absent (unless we construe it as simply another way of saying that memories aren’t being formed). If we accept this suggestion, which is supported by arguments to the effect that we couldn’t drive without some consciousness of the road, it would seem necessary to grant perceptual consciousness to robot cars as well (cars that can drive themselves). Clearly robot cars aren’t conscious, and so if they can drive without consciousness so can we, in principle. Of course we could argue that the notion of consciousness that my judgments concerning robot cars rest on is that of introspective consciousness. This may be so, but to extend perceptual consciousness to simple automatons is at best to misuse the word consciousness, and would make our understanding of introspective consciousness suspect as well.

As for the other two possibilities, that we drive unconsciously, or that we aren’t forming memories during that time, what is interesting about them is that we have no way of subjectively determining which is occurring. But consciousness is a subjective notion (by which I mean that the best judge of whether someone is conscious is the person themselves), and so I would argue that these possibilities are essentially the same: a period in which memories aren’t forming is simply a way of being unconscious. This plays into my theory, proposed previously, that a moment is conscious not only because of properties present at that moment but also because of how it is part of a temporally extended conscious system. Moments in which memories aren’t forming then can’t be considered conscious, because they aren’t properly integrated into subsequent mental states.

Of course this raises the tricky question of how to think about times that we used to remember but have simply forgotten. Our initial response might be to argue that somehow even those forgotten memories are integrated with us, but a much better solution is to abandon the naïve intuition that takes a person’s entire life as a single conscious system. If we instead consider smaller increments, such as periods of continual consciousness, the problem disappears. It is true that you might forget the events of the morning (although most people can recall them in most situations), but even so they still have a distinctive causal effect on your mental states for the remainder of the day. We can’t make that argument for memories forgotten over the course of a lifetime, however, because the causal effects of a specific forgotten mental state have probably been overshadowed by the effects of the many other mental states the system has been in, and thus it is hard to say how they could be considered to have a causal connection to the mind many years later. In fact we might even want to consider breaking up the day into multiple overlapping conscious systems, but that suggestion is getting far ahead of ourselves, so just consider it idle speculation for the moment.

Another possible problem is as follows: instead of “zoning out” as described here, the person on the road continues to be conscious as normal, but when they snap back to reality all the memories of that time are lost (it has no causal effect upon them). This isn’t an actual possibility for what happens when people zone out on the road, but it is still something that we should consider. What should we say about this case? From the viewpoint of the person after “snapping back” those experiences aren’t conscious, since they aren’t part of their conscious system, but they are part of a different conscious system, so in that sense they are conscious, just conscious to a viewpoint of which the person after the fact has no knowledge. The question this raises, then, is what right we have to say that our supposedly “unconscious” mental activities are not part of some different conscious system. Surely this sounds strange, but what reason do we have for rejecting it? The answer leans on the fact that while our brains are complex they are not infinitely so. The way in which states are connected to each other over time allows some behaviors, such as language and reasoned deliberation, to arise. However, when people are unconscious they fail to display these complex behaviors (not even behaviors as complex as those displayed by simple animals, such as solving puzzles). Consider, for example, akinetic mutism, in which patients are by all standards awake but at the same time lack consciousness. Now it is true that such behaviors could be displayed by unconscious systems (which is why the Turing test doesn’t infallibly detect consciousness), but such unconscious systems must be vastly more complex to act just like a conscious system without being conscious, more complex than our actual brains are.
And so since we don’t display the abilities that our consciousness allows us to have during episodes of supposed unconsciousness we can safely say that they aren’t parts of some other conscious system.

September 28, 2006

A Brief Look at Materialist Mental Causation

Filed under: Mind — Peter @ 12:49 am

In most materialist theories (and non-materialist theories that wish to fit experimental evidence) it is supposed that somehow mental states or properties supervene on physical ones. Not all collections of physical properties have such supervening mental properties, but in the cases where the mental does supervene on the physical it does so necessarily (meaning that there are no physically possible worlds where that specific mental property or properties do not supervene on that collection of physical properties). This then raises the question: how do these mental properties causally affect the world? We know from experience that the mental does play a causal role, but supervenience seems to leave no room for it.

In this picture the Ms are mental properties and Ps the physical properties. The arrows upwards from the Ps to the Ms represent the supervenience relation, and the black horizontal arrow represents physical causation. The red and blue arrows are both possibilities for real mental causation, but in both cases it would seem physical causation and the supervenience relation have already determined matters. Let me simply assert that the red arrows are not possible causal links, and if you want the details as to why I suggest you read Jaegwon Kim’s excellent book Mind in a Physical World.

One way out of this dilemma is instantiation. Instead of appealing to a generic notion of supervenience we instead say that the physical properties instantiate mental properties. A visual representation of what instantiation is like is depicted below:

In this picture the physical properties, the Ps, instantiate a mental property M. The same mental property may be instantiated by different collections of physical properties (represented by the different colors of the Ps), and if a collection of physical properties instantiates some mental properties there are no physically possible worlds in which it doesn’t instantiate those mental properties (as under supervenience). Strictly speaking we might say that under this view the mental properties, as independent entities, don’t exist. However, they are certainly valid descriptions of similarities between different physical realizers, and so they do exist as valid abstractions. Under this view mental causation doesn’t pose a significant problem.

Because the mental properties are no longer something over and above the physical, mental causation simply is physical causation. As the mental is in a sense “made of” the physical, it has causal powers because its physical realizers have causal powers, and there is no longer a need for additional mental causation.

There is a second, less talked about, problem with mental causation under a materialist view of the mind. The problem arises because we think that there can be many different physical realizers of a mental state. Because their physical properties are different these realizers will be the cause of different subsequent collections of physical properties, which is as we expect. However, there is no guarantee, under many materialist theories, that these subsequent physical properties will realize the same mental properties. This situation is depicted below.

If this is true then it is hard to say how there could be mental laws at all. Given any mental state there could be an infinite number of possible successor states, which would make the mental realm strange in a way that we should find unacceptable (because it conflicts with experience). What we need to do is find a materialist theory that permits one mental state to be followed only by some limited subset of the possible mental states under normal circumstances. One solution is functionalism, which states that a collection of physical properties realizes mental properties only when it has the correct kind of causal powers, specifically those that result in the right kind of subsequent mental states, in effect building the requirement of regularity in the succession of mental states right into the theory. Another approach is to look at the whole system, extended through time, as what can be conscious. The fact that one state is succeeded by another then becomes an additional property that can determine whether the system is instantiating mental properties, again making only certain sequences of mental states possible if mental properties are to be instantiated at all.
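The problem, and the functionalist fix, can be sketched in a toy model (my own illustration; all state names are hypothetical). Two physical states both realize the same mental state, yet physical causation sends them to realizers of different mental states, so no mental-level law holds; the functionalist move is to count a physical state as a realizer only when its causal powers lead to the right mental successor.

```python
# Toy model of multiple realizability and the threat to mental laws.
# Which mental property each physical state realizes:
realizes = {"P1": "M1", "P2": "M1", "P3": "M2", "P4": "M3"}
# Physical causation: each physical state's successor.
physics = {"P1": "P3", "P2": "P4"}

def mental_successor(p: str) -> str:
    """The mental state realized by the physical successor of p."""
    return realizes[physics[p]]

# Two realizers of M1 yield different mental successors, so the
# would-be mental law "M1 is followed by M2" fails:
print(mental_successor("P1"))  # M2
print(mental_successor("P2"))  # M3

# A functionalist restriction: p realizes M1 only if its causal
# powers produce the right successor (here, stipulated to be M2).
def functionally_realizes_m1(p: str) -> bool:
    return realizes[p] == "M1" and mental_successor(p) == "M2"

print(functionally_realizes_m1("P1"))  # True
print(functionally_realizes_m1("P2"))  # False
```

Under the restriction, P2 simply fails to count as a realizer of M1, and the regularity among mental states is restored by definition.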

September 27, 2006

Necessary Parts of Consciousness

Filed under: Mind — Peter @ 1:29 am

If we are to explain consciousness in terms of some system or process (for the first steps towards such an explanation see here), what are the essential properties of consciousness that our theory must account for? (And by account for I mean that they are not built into the explanation as fundamental constituents of consciousness, but instead arise as necessary properties of the systems that our theory picks out as conscious.) Ideally these essential properties are essential not only in the sense that all conscious systems must have them but also in the sense that by having these properties a system is by definition conscious (thus showing that our proposed theory really does capture consciousness). Without further ado then, here is the short list:

First Person Perspective
The first person perspective can be defined as that which is responsible for a conscious being’s experiences (see next item) being presented as belonging to them. Moreover, the first person perspective has a kind of temporal extension (or is at least felt to have a temporal extension), in the sense that remembered experiences are remembered as being presented to this same first person perspective. Another way of putting this idea would be to say that the information that is part of the conscious system is structured in terms of how it relates to this perspective (which is itself constructed by the fact that conscious experiences and information are related to it). Of course my use of “structured” may itself seem a little vague. In some cases, such as thinking about positions in space, this structure is simply the fact that they are presented in terms of the first person perspective (where things are in relation to me, or to where I place my mind’s eye when I think of them), and in other cases it may be as simple as knowing that the thought “2 + 2 = 4” is my thought, even though the mathematical fact itself isn’t dependent on my perspective in any way. I will omit arguments as to why the first person perspective is essential to consciousness, since it has long been held to be the defining feature of consciousness.

Experiences
Another essential part of consciousness is experiences. Experiences, as I define them here, include not just sensory impressions, but mental activities that are consciously “experienced” as well, such as thinking, remembering, desiring, etc. The existence of a first person perspective and experiences are obviously very closely related, since in order to have a first person perspective there must be experiences that can be related to that perspective, and we might define some state as an experience only if there is a first person perspective it can be related to. However, I have separated them here simply because an account of consciousness may give different accounts of them (for example: experiences may be generated by one process, the first person perspective by a second, and then a third may bind them together). Experiences, in theory, should also account for “qualia” (which is partly how experiences are presented) and the perceived unity of consciousness. Again, there is no real need to defend experience as necessary for consciousness, since without experiences there would be nothing to be conscious of.

Self-Consciousness
Finally we come to self-consciousness. Sometimes self-consciousness is considered to be part of, or responsible for, the first person perspective (or vice versa). However, I separate them, because I use self-consciousness to mean information about the system itself that becomes part of the experience. Obviously though this self-information works side-by-side with the properties of the experience that make it experienced from a first person perspective. Of the components that I have pointed out as necessary for consciousness, this self-information may seem the least important. For example, people with amnesia seem just as conscious as us, and clearly they have less of this self-information embedded in their experiences. However, this self-information can be seen as responsible for the existence of chains of thoughts (since each thought experience must contain some information about the previous thought experiences, information about the system, in order to build upon them in a rational manner). In addition, this self-consciousness seems required for a first person perspective with “temporal extension”, since information about the usual form of the first person perspective, that it is like the current first person perspective, and that it has existed for some time, would seem to make its way into experience via this self-information, and hence self-consciousness. However, the amount of self-consciousness required for a system to be conscious is probably minimal (much less than is found in people), since we think that some animals may be conscious, and surely their powers of self-consciousness are less developed than ours.

Strictly speaking intentionality is not required for consciousness. We can conceive of conscious beings who have experiences that relate to nothing in the world (or who conceptualize those experiences in ways that don’t relate to the world). However, such conscious beings would not be recognizable as conscious to us; we wouldn’t know how to communicate with them or recognize them from their behavior or mental activity (since none of it would be world directed). Thus an explanation of intentionality should probably be part of our explanation of consciousness, or at least an easy addition to our theory of consciousness, given that the conscious beings we are theorizing about all possess some form of intentionality. Perhaps I am simply a little biased though, since the approach to consciousness that I am currently working with has the ability to provide a decent explanation of what makes a system intentional when turned to that problem.

September 26, 2006

Is Philosophy a Confused Discipline?

Filed under: Metaphilosophy — Peter @ 12:26 am

This may seem like a strange question, but perhaps not an unreasonable one considering that there are two different approaches to philosophical questions, and it is easy for us to confuse one for the other (aided by the fact that philosophers often fail to explicitly state which approach they prefer). One approach is to view philosophical questions as about our concepts and the other is to see them as about the world, and the answers given by these approaches can be as different as night and day.

Personally I approach philosophical questions as about the world. I do see our concepts as interesting, and worthy of consideration, but I also feel that they are best studied by psychologists. One reason to engage in philosophy in this fashion is because the answers, if correct, reveal truths about the world, independent of people and their ideas. The concepts people have change over time but the structure of the natural world does not (to the best of our knowledge), and so by working in this way we can hope that our conclusions will have lasting significance. Another reason not to approach philosophical questions as about concepts is that many of our concepts have been shown to be paradoxical, possibly even all of the philosophically interesting ones (our concept of identity over time is the most easily shown to be paradoxical). Such inconsistencies are even more likely to be found when we attempt to reconcile concepts as employed by different people. The natural world, however, does not contain paradoxes (again, to the best of our knowledge), and so while working with concepts can be problematic, studying the world is much less so, since when our concepts conflict with it we can simply reject them as wrong.

Of course approaching philosophical questions as about concepts has some points in its favor as well. For one we might believe that a study of our concepts must be prior to any other kind of investigation, and thus that we must settle disputes about our concepts themselves before we move on to examine the rest of the world (Husserl could be read as taking this position). We might also be motivated to approach philosophical questions in this way because we feel that our concepts play a central role in our life, and that by examining them we can resolve “human problems”. Even though our solutions might not be good for all eternity they can still be useful to us now. Finally, we might feel that answering questions about the world is the domain of science, and that philosophizing in that realm is an invitation to look foolish when confronted by later evidence.

So perhaps both approaches have their merits (even though I by far prefer taking philosophical questions as about the world). What is a problem, as mentioned earlier, is when they are mixed together, specifically when philosophers tackling the same questions and publishing in the same journals take different approaches. This is a particularly severe problem in my area of interest, the philosophy of mind (of course epistemology, metaphysics, etc. suffer from this problem as well). Some approach the study of consciousness as the study of the concept of consciousness, which, I will admit, suggests that consciousness is independent of the body, etc. Others, myself included, study consciousness as something that is part of the world, meaning that our intuitions, and our pre-analytic conception of it, may be drastically wrong. But of course philosophers don’t label themselves as taking one of these approaches, and so the arguments of one approach are used in an attempt to rebut the position of the other approach (for example, my previous post shows how the argument that consciousness is conceptually independent of the physical world fails when we are interested in the consciousness that exists here and now).
