On Philosophy

September 12, 2006

The Murder of the Self

Filed under: Self — Peter @ 12:03 am

A common question that comes up when thinking about personal identity is: what makes the person I am now the same as the person who was me some time in the past? Clearly we aren’t identical, since many things change over time, but we do want to claim that there is some “personhood” that is shared by us.

A common candidate for the property of “personhood” is psychological continuity. The me now and the me yesterday share many of the same desires, attitudes, memories (including new memories of being me yesterday), etc. Dualists would reject this view of “personhood”, of course, but fortunately we are not dualists, for reasons I have outlined elsewhere. The only other alternative would be to subscribe to some kind of physical continuity as defining who we are, but since our cells, and the proteins that constitute them, are constantly being replaced, without any noticeable effect on our “personhood”, I think we can safely rule it out as an alternative.

Two immediate consequences spring from defining personal identity in this way. One is that two people-instants (a person at a particular point in time) are not simply the same person or not. Psychological continuity comes in degrees, and thus our judgments as to whether two people-instants are the same person must as well. The me now and the me yesterday are the same person to a high degree, while the me now and the me several years ago are the same person to some lesser degree. The second consequence is that it is possible to kill someone by preventing them from sharing a psychological continuity with future people, even if their body lives on. For example, if we had a machine that replaced a person’s desires, attitudes, memories, etc. with those of another person, then to use the machine on someone would be to kill them.
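Since continuity here comes in degrees, it can help to picture it numerically. The sketch below is entirely my own construction; the attribute sets and the overlap measure are illustrative assumptions, not anything the view itself specifies:

```python
def continuity(a, b):
    """Degree of psychological continuity between two person-instants,
    modeled as the fraction of shared attributes (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b)

me_now = {"likes coffee", "remembers 2005", "plays chess", "wants to travel"}
me_yesterday = {"likes coffee", "remembers 2005", "plays chess", "wants to sleep"}
me_years_ago = {"likes coffee", "remembers 1999", "hates chess", "wants to travel"}

print(continuity(me_now, me_yesterday))  # 0.6 -- the same person to a high degree
print(continuity(me_now, me_years_ago))  # 0.333... -- the same person to a lesser degree
```

On this toy picture a judgment of "same person" is not a yes-or-no matter but a number between 0 and 1, which is exactly the first consequence described above.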

Now let’s consider a slight modification of the person-replacing machine. Instead of replacing the person’s psychological attributes all at once it transforms their existing ones into those of another person over some short period of time. Clearly this process is equally destructive to the original person. But, as the title of this post foreshadowed, this is just what happens in ordinary life, since we change over time. For some people there is likely a point in their lives where the only thing they have in common with the “them” at some distant earlier point in their lives is perhaps some vague memories of that time. And, even if these vague memories provide some psychological continuity, it is no more than I have with a stranger who, let us say, shares some of my attitudes and goals. And if the psychological continuity I share with the stranger isn’t enough to make us the same person then clearly the vague memories we might share with our earlier selves aren’t enough either.

Have we then killed our earlier selves? (Or perhaps our earlier selves committed suicide, since they didn’t object to the changes that turned them into us.) And even if we have, it certainly wouldn’t make sense to fight this process, since that would force us to lock ourselves into the same attitudes, opinions, and desires that we have now, definitely not a healthy idea. Now some might object to this conclusion, claiming that because there were intermediate stages that shared the necessary psychological continuity with our earlier selves then somehow our “personhood” has been transmitted intact through them. If this were the case then we shouldn’t resist having the person-replacing machine, modified to transform us over the space of an hour, used on us, since in that case too there are intermediate stages to preserve our “personhood”. Of course one might still resist if they disliked the person it changed them into, so let me stipulate that the person the machine changes people into is someone that everyone would desire to be, and in fact strive to become (intelligent, wise, sensitive, moral, etc.). Even so, I think most people would resist having the machine used on them, because they feel that in some way it would eliminate the person who they are now, and if this is the correct interpretation then my original reading of psychological continuity holds, and thus the people some of us were in the past really have been replaced by new people. I don’t know exactly how we should read the moral implications of this, so I will leave them as an exercise for the reader.

August 29, 2006

Consciousness, a Different Approach

Filed under: Mind,Self — Peter @ 12:02 am

There are many competing theories that attempt to explain what special features of some mental states make them conscious. There are higher order theories, global states theories, and self-referential theories. In these theories various ways of structuring mental states are considered, with the idea that the proposed structure explains the essential features of conscious experience. What such theories don’t address, however, is why such organizational features result in consciousness. Many of the proposed theories are simple enough that we could implement them in a computer; for example higher order theory is as simple as one mental state being directed at another. But even if we did, the computer wouldn’t be conscious. So then, what really causes consciousness?

I will begin my investigation here by pointing out that no state in isolation could be considered conscious. If we took a snapshot of your brain, that snapshot wouldn’t be conscious, even though all the information about that particular moment has been captured. In other words it only makes sense to talk about a conscious state when the state in question is part of a conscious system. Thus I think that we should investigate consciousness by first examining what features a system, one that exists for an extended period of time, must have to be considered conscious. A conscious state then would simply be one state of this system; we call it conscious only in virtue of its being part of this larger system, not because of any intrinsic features.

So then, what are the defining features of a conscious system? There are three key elements. One is that it must have a “self”, a store of information about the system itself, and this information must be modified over time to reflect the experiences of the system. Secondly it must have experiences, experiences that contain information about the external world (perception), information generated by mental processes (thoughts, decisions, memories, imaginings, etc.), and information about the self. Finally, the current “self” and “experience” must be causally connected to future “self” and “experience” states in the correct way. Future “self” states must be dominated by the previous “self” state (the continuity of identity over time), but experience should be able to modify the self in predictable ways, through the acquisition of new concepts, memories, desires, and goals (the thought portion of experience is obviously key to this). Future “experience” states are dominated by previous experience states; for example the content of a thought (and of perception at the moment) strongly influences the content of subsequent thoughts. Obviously I am not attempting to give a complete description of conscious systems, but I would like to point out four particular contributors to the experience state. First, the decision of what to pay attention to in previous experience states controls what information (i.e. external experiences, internal thoughts, etc.) dominates the experience (which has only finite capacity), although unconscious processes do influence attention, occasionally yanking it in unexpected directions. Secondly, each experience state contains “echoes” or “remnants” of previous experience states, which accounts for our perception of time. Thirdly, the information contained in each experience state is “formatted” or structured by the information contained in the self; for example our perception is pre-fitted into conceptual categories without conscious effort. Fourth, the “self” state contributes some information, accounting for self-awareness. These three factors, the self, experience, and their causal connections, are sufficient for a system to be conscious.
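The three elements just described, a persistent self, a stream of experience, and the causal links between them, can be pictured with a toy sketch. Everything here is my own invention for illustration; the class name, fields, and update rules are assumptions, and of course nothing so simple is being claimed to actually be conscious:

```python
class ConsciousSystem:
    """Toy model: a persistent "self" store plus a transient experience state."""

    def __init__(self):
        # The "self": a store of information about the system itself.
        self.self_state = {"memories": []}
        # The current experience state.
        self.experience = {"perception": None, "self_info": None, "echo": None}

    def step(self, perception):
        previous = self.experience
        self.experience = {
            "perception": perception,  # information about the external world
            # the self contributes information to each experience (self-awareness)
            "self_info": {"memories": list(self.self_state["memories"])},
            # an "echo" or remnant of the previous experience state
            "echo": previous["perception"],
        }
        # Experience modifies the self in a predictable way: it acquires memories.
        self.self_state["memories"].append(perception)

system = ConsciousSystem()
system.step("red ball")
system.step("blue cube")
# The current experience now carries an echo of the earlier perception,
# and the "self" has accumulated both perceptions as memories.
```

The point of the sketch is only the causal shape: each new experience is dominated by its predecessor (the echo), is colored by the self (the self-info), and in turn modifies the self.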

Obviously, though, consciousness is not the only part of the mind; there is also the unconscious, which is certainly an important contributor to the way we think and make decisions. I won’t go into too much detail about the unconscious, but I will say that the role it plays in this account is in controlling the way in which one “conscious” mental state becomes another over time. For example a thought may be based on previous thoughts, but what exactly determines its content is an unconscious process. The same thing can be said about decisions, concept formation, etc. See also my account of beliefs as unconscious here.

Another feature of the mind to consider is introspection, which is handled in two ways by this account. One type of introspection is attempting to reflect upon one’s self-awareness. In this case I would say that our attention becomes focused on the self-information that is part of every experience, causing that information to dominate the experience. There is another kind of introspection, where we attempt to reflect upon our real reasons for acting, or our real beliefs, which are unconscious. Experiments have shown that such introspection is unreliable, and I would say this is because the content of experience in this case is a fictional account, based on our past experiences and not on the actual workings of the unconscious.

To conclude, allow me to show why such a system should be considered conscious. Note that it is the system, allowed to exist for some extent of time, that is to be considered conscious, not the “self” state or the experience state, or even a combination of both. Such a system, if run, would come to have thoughts about itself and its experiences, since the relevant information contributes to the thought formation process. Those thoughts in turn would have an influence on a more persistent record of information (the “self” state), which then in turn colors future experiences (it can learn, and it can learn about itself). If asked to describe its experience it could, since it is able to make a decision (a kind of thought) to speak, and the words it decides to speak can indeed describe its own experience and self, since that information lies in the causal past of the decision. I would call such a system conscious; to doubt that such a system is conscious would seem tantamount to doubting that we ourselves are conscious, since we can’t differentiate our own experience, or the reports of other people about their experience, from the experience and reports of such a system.

August 28, 2006

Two Problems Facing Self-Representational Approaches to Consciousness

Filed under: Mind,Self — Peter @ 12:13 am

A recently popular* approach to consciousness is to define a conscious state as a mental state that is directed at, or represents, itself. This, according to the theory, explains the self-awareness that is essential to conscious states, and, in addition, distinguishes them from unconscious mental states. The idea that something can be self-representing may seem contradictory, but as shown by Kenneth Williford in his paper “The Self-Representational Structure of Consciousness” there are solutions to this apparent difficulty, and so I will not count it among the problems.

As attractive as it may sound, philosophers such as John Drummond and Rocco Gennaro have shown that there are serious difficulties facing such an account of consciousness. One is that our experience of self-awareness doesn’t seem representational, except in the special case of purposeful introspection. In an experience of looking at an object it feels as though the object is being presented to me. However, my self-awareness doesn’t feel presented; it is simply there, an ever-present part of experience. The second problem is that it seems difficult to handle the case of purposeful introspection under such a theory. Surely introspection is different from our normal mode of experience, but if introspection is consciousness being directed at itself, a common understanding of introspection, what separates it from our normal self-directed representation?

An unmodified understanding of self-representation can solve both these problems. We could argue that our self-representation doesn’t feel like our representation of external features of the world because the representation we are talking about is part of the unconscious structure of experience. The self-representation then may not be consciously experienced as representation even though it is fundamentally structured as such. We could also argue that in the case of introspection more of the state is directed at itself, specifically the parts that we feel as presentational, in line with the previous answer. Or, alternatively, we could argue that introspection doesn’t really direct the currently conscious state at itself, but at a remembered state or some fabricated state, thus explaining the unreliability of introspection in certain cases.

While such answers may suitably address the objections, they may seem unmotivated by experience and theory. Fortunately, there is a better answer. The following version of self-representational consciousness is borrowed in large part from Uriah Kriegel: We break each conscious state into two parts, which I will call the monitoring aspect and the content aspect. The monitoring aspect is directed at, or about, the content aspect. Additionally, the monitoring aspect and the content aspect together form a “complex”, and thus a state that we can call conscious. The nature of being a complex is key to this account. In a complex both parts are influenced by each other, such that neither part would be the same if the other was missing. One way to understand this is that they each contain information about each other, information that is generated by the other part, but not necessarily representational information. For example two parts of a broken plate with matching edges could be considered a complex, because with each edge we could deduce information about the other piece, and neither would have had the edge that it does without the other piece having its edge.

Now, allow me to make some original modifications to this account. I would argue that the content aspect forms a complex by being colored by information from the monitoring aspect about the self. This then is the feeling of self-awareness that infuses all of our experiences; it is what allows that experience to “fit” into the conscious “complex”. The monitoring aspect then could be identified with our “self”: besides representing the content aspect, it contains information about who we are. We could even use this description to explain the continuity of the self over time, specifically that the monitoring aspect and the content aspect of consciousness aren’t transient, created simply for a moment of experience, but develop over time together. As our experience of the external world changes the content aspect changes to reflect this, and the monitoring aspect of course changes its representation of the content aspect as well. Likewise as the monitoring aspect (the “self”) changes, the information about it in the content aspect changes.

This account has the added benefit (like most self-representational approaches) of explaining why our self-awareness can’t be mistaken, because if the monitoring aspect represented the content aspect incorrectly, or the content aspect contained incorrect information about the monitoring aspect, then together they wouldn’t form the appropriate complex, and thus wouldn’t be conscious.

This account answers the first objection, that self-awareness doesn’t feel like representation, by identifying our feelings of self-awareness that are part of each experience with the information about the monitoring aspect contained within the content aspect. Since this information isn’t representational by nature it is no surprise that it doesn’t feel like self-representation. Handling introspection is a trickier case. I would divide introspection into two categories, good and bad introspection. “Good” introspection is deducing information about the self from the information that is part of the self-awareness of a current experience or a remembered experience, and surely this doesn’t pose any problems for the account, since having the content aspect directed at a memory or at its own self-awareness is different from the usual self-representation. As for “bad” introspection, reflection upon normally unconscious aspects of the mind, I would say that it is a case of the content aspect being directed at a fictional account, which explains why such introspection can be so unreliable.

* I say recently popular because there are examples of philosophers from as far back as the 1920s who have put forward ideas similar to these (specifically Brentano).

August 26, 2006

How to Defeat Internal-World Skepticism

Filed under: Mind,Self — Peter @ 2:11 pm

Is it possible to be a skeptic about your own conscious states? Specifically, is it possible to rationally believe that you are not having the conscious thoughts that you think you are having? The goal then is to argue that the mind works in such a way that it is not rational to have such doubts, or to show why we have such a hard time entertaining such doubts if they are reasonable. In their paper “Internal-World Skepticism and the Self-Presentational Nature of Phenomenal Consciousness” Horgan, Tienson, and Graham document three failed solutions to the problem: access consciousness, the view that we have non-inferentially formed beliefs about our own current occurrent mental state (COMS); ontological consistency, the view that the first order mental state is part of the COMS belief, which is directed at itself; and ontological consistency plus a long armed functional role, the view that the COMS belief is also defined by its causal role within the mind. Unfortunately all of these proposed solutions can be defeated in the same way, because it is possible under all of them that we simply think, incorrectly, that we have such beliefs. For example, under access consciousness it is possible that the COMS beliefs are simply systematically in error; in the case of ontological consistency and ontological consistency plus a long armed functional role it is possible to conceive of COMS beliefs that play the same role, but whose contents fail to align in any significant way with the actual workings of the mind.

Unfortunately the solution provided by the aforementioned paper has its own problems. The authors propose that our phenomenal experience is self-presenting, and thus that to have the experience is to know we are having that experience, and that it is self-presenting. This view however suffers from some of the same problems as the previously mentioned failures: specifically, it is possible that we simply think, incorrectly, that we have such self-presenting experiences, or that the content of those self-presenting experiences is radically in error.

Before I go on to discuss my own solution let me first mention that it is important that we not prove too much with regards to introspection. For example there is a well-known tendency for people to prefer the rightmost object in a lineup. Most people are ignorant of this unconscious preference; if you ask them why they picked the rightmost object they will give you many other reasons why they chose it, but they have no idea that there was an unconscious bias, even though that bias did contribute to their decision to pick the object on the right. Thus we don’t want to insist that the facts our conscious self-awareness reveals match up perfectly with the workings of the mind; we want to say that they match up with the content of consciousness. The reasons described by the person in the experiment are their real conscious reasons, even if there were unconscious influences on their decision or reasoning, or at least this is what we would like to conclude.

One obvious way out of this problem is to define consciousness as simply the subject of our self-awareness. This certainly eliminates the problem of skepticism, but unfortunately this solution suffers from its own set of problems. Specifically our self-awareness is itself conscious, and thus we would have to conclude that our self-awareness was a subject of itself. However it is impossible that conscious self-awareness could contain all the information about itself (in order to be conscious) as well as information about other contents of consciousness (since this would imply that it could have infinite informational capacity, among other problematic conclusions). The way to overcome the problem of self-representation is to allow that some details are omitted by the self-representing mechanism, and that such details can be filled in when needed (an idea stolen from “The Self-Representational Structure of Consciousness” by Kenneth Williford). Such a resolution will not work for this solution though, because if some details were left out by the self-representing mechanism those details wouldn’t be conscious, and we would be right back where we started.

What I consider to be the real solution turns on a property of thought: thinking of a complex statement or idea implies that you are thinking about its constituent parts. For example if you are thinking “there are no pink elephants” you are thinking about “pink elephants” and thus in turn “pink” and “elephants”. This seems reasonable to me, and I can’t think of a reason to deny it, so I will simply present it here without a defense for the moment. If we accept this, the solution is fairly obvious, as the possible skeptical defeaters mentioned earlier are all of the form “you don’t really have the conscious thought ‘X’, you simply think that you have conscious thought ‘X’”. However, if we accept the principle I introduced earlier then it is reasonable to conclude: if I think that I am thinking ‘X’ I must also be thinking ‘X’, and thus ‘X’ is a real conscious thought after all. The only possible remaining skeptical defeater is of the form “how do you know that your consciousness corresponds to anything real?” We can use an argument against epiphenomenalism to defeat that possibility, since such an argument demonstrates that our consciousness has real causal powers. You can read my argument against epiphenomenalism here.

August 3, 2006

Being Risk Averse

Filed under: Ethics,General Philosophy,Self — Peter @ 12:38 am

As I mentioned last time, it is often thought that we should use the expected value of our actions to determine if they are rational. Specifically we weigh each outcome of an action by its likelihood, and if the total is positive we consider the action to be a good choice. However, there are some cases where the expected value does not seem to reflect the actual decision making process (as judged by the choices made). For example if there is a high probability of some small reward and a very low probability of a great loss many people will consider this a bad choice to make, even if the expected value is positive. Similarly, when faced with two bets, one of which has a high probability of a small reward and the other of which has a very low probability of a large reward, people will often choose the bet that is “more of a sure thing” even when the other bet has a greater expected value. Are these choices rational, thus revealing that expected value doesn’t fully capture rational action, or is being risk averse irrational?
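To make the kind of case at issue concrete, here is a small sketch (the probabilities and payoffs are numbers I have made up purely for illustration):

```python
def expected_value(gamble):
    """Expected value of a gamble given as (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in gamble)

# A high probability of a small reward and a very low probability of a great loss.
risky = [(0.99, 10), (0.01, -500)]

# The expected value is positive (0.99 * 10 - 0.01 * 500 = 4.9),
# yet many people would still judge taking this gamble a bad choice.
print(expected_value(risky))
```

On the expected value standard this gamble is rational to take, which is precisely what the intuitions described above seem to resist.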

One possible solution is to say that we should weight each outcome not by its probability but by some other factor, say its probability squared, which reduces the weight of unlikely outcomes. This would explain why people often prefer more of a “sure thing”, but then this makes the case where people avoid the very low possibility of a large loss seem even less of a rational decision. Perhaps then we can explain our behavior by assuming that people mentally discount favorable outcomes and overemphasize negative ones. We might even be able to tell an evolutionary story about this, arguing that a group whose members take the riskier choices will shrink due to the occasional unlucky result, although the remaining members will be well off. On the other hand a group that “plays it safe” will have more of its members survive, even though on average they will be worse off than the other group, and thus in the long run will come to dominate the gene pool. This definitely provides a concrete explanation for why people are risk averse, and to different degrees, but it also effectively denies that there is any rationality in the individual’s choice to be risk averse; it becomes simply an innate psychological drive.
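The probability-squared proposal can also be made concrete (again with numbers I have invented for the sketch). Damping each outcome by the square of its probability favors the “sure thing”, but it makes the rare great loss look even more negligible:

```python
def expected_value(gamble):
    """Weight each outcome by its probability."""
    return sum(p * payoff for p, payoff in gamble)

def squared_weight_value(gamble):
    """Weight each outcome by its probability squared,
    reducing the weight of unlikely outcomes."""
    return sum(p * p * payoff for p, payoff in gamble)

sure_thing = [(0.9, 10), (0.1, 0)]    # high probability of a small reward
long_shot = [(0.05, 200), (0.95, 0)]  # very low probability of a large reward

# Plain expected value favors the long shot (10.0 vs 9.0), but squared
# weighting favors the sure thing (8.1 vs 0.5), matching observed choices.

# For a rare great loss, however, squared weighting shrinks the penalty
# term from -5.0 to -0.05, making avoidance look even less rational.
rare_loss = [(0.99, 10), (0.01, -500)]
```

So the squared weighting captures one of the two intuitions at the cost of making the other intuition harder, not easier, to justify.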

I don’t think that we should be so quick to abandon rationality though. Let us set aside expected value for the moment and consider how we justify our choices in general. It has been proposed that our choices on ethical matters are guided by a desire to be able to defend our actions to other people (specifically by Scanlon). A slight modification of this idea is that we seek to act in ways that we can justify to ourselves (echoes of Korsgaard), specifically to our future selves. There is not just one future self to consider, however: if an action has multiple possible outcomes we seek to be able to defend our choice to all of our future selves, equally. This is why many of us won’t be willing to take a gamble that has the possibility of a large loss, even if there is a high probability of a small reward, because we can’t justify our choice to the future self that loses. We can justify some gambles however, especially when the loss is comparable to the rewards, or the loss is negligible, and the degree to which we feel that we can justify our gambles determines how risk averse our choices are. Expected value then is what we use to choose between the risks that we can justify to our future selves, but considerations of justification come first, which explains why not all choices seem to be guided by expected value.

I admit that in some ways this might be seen as a radical stance to take about human behavior, that it is motivated more fundamentally by a desire to act in ways that can be defended, if only to ourselves, than by what will, on average, result in the most rewards. It is true that the actions of some people might seem to be motivated solely by self-interest, but I think that for these people the principle of self-interest is the only way that they can defend their actions to themselves, and that not all people are motivated in this way.
