On Philosophy

December 31, 2006

Postcard

Filed under: General Philosophy — Peter @ 12:00 am

Initially this was just a joke, until I realized that people are doing basically the same thing with time capsules. Question of the day: does the idea of the past communicating with the future (in a one-way dialog) make sense, or does it rest on a fundamental misconception of time?


December 30, 2006

The Study of Ethical Intuitions

Filed under: Ethics,Metaphilosophy — Peter @ 5:39 pm

Some approach ethics by appealing to our ethical intuitions; we are asked to accept or reject a position based on how well it agrees (or disagrees) with our intuitions. If this is used simply to motivate an investigation into some area, perhaps it doesn’t do too much harm. But if these ethical intuitions are seriously considered as the basis of ethics, then it leads inevitably to some kind of relativism, from the simple fact that people don’t all share the same ethical intuitions. Although people’s intuitions may be fairly homogeneous in a single culture, there are no such guarantees in general (for example, at one time many thought slavery was acceptable, while now we know it to be reprehensible).

Of course few who think that our ethical intuitions are a suitable basis for ethics welcome relativism, because in many ways relativism simply indicates that one has made a mistake (when looking for some set of ethical facts, to become a relativist is to admit: “I couldn’t find any”). One common defense against relativism is to appeal to some set of core intuitions, which are in principle shared by everyone, as the basis for ethics. If such a core set of intuitions did exist it might form a suitable basis for ethics, and it certainly wouldn’t be relativism. But there is no reason to believe that a core set of intuitions does in fact exist. Even if all human beings shared them, there is no guarantee that all rational agents everywhere (aliens, etc.) would have them too, and ethics is supposed to apply to them as well. A second problem is that not even all people share a core set of ethical intuitions; there are always a few individuals with an abnormal psychology who won’t have them, no matter what you define them to be. Again, ethics is supposed to cover these people as well, even if they don’t accept it. And if we think that it doesn’t, we are back to being relativists. Another option for avoiding relativism, less often used, is to say that ethics is determined by the intuitions of the majority. This fails for three reasons. First, it is unmotivated, except by a desire to avoid relativism. Second, we can’t know what this majority opinion is, since there may be rational agents that we are unaware of. And finally, the majority opinion changes over time, meaning that we are still left with a kind of temporal relativism (and we can’t appeal to a majority of all people at all times, as we have no idea what the opinions of future people will be).

But the fact that ethics conducted by studying our ethical intuitions fails to give us what we want from ethics shouldn’t be surprising, since it isn’t really ethics at all. Instead I would contend that conclusions motivated by our ethical intuitions are really a kind of psychology (a very confused kind), as our intuitions are part of our psychological makeup, and thus to study them is to study our psychology. Certainly we can easily transform the conclusions arrived at by such an approach into obviously psychological ones simply by prefixing them with “people think that”, and a system of ethics derived from our intuitions would have to be understood as a hypothesis about their unconscious basis. And the fact that we are really doing psychology in such an approach explains the problems with relativism highlighted above, because much of psychology really is relative to specific cultures. And, even though there are some psychological characteristics that the vast majority of people from all cultures share, there will always be a few abnormal individuals. This is to be expected in a psychological study, and the fact that what we thought of as ethics demonstrates these features is all the more reason to think of it as a kind of psychology.

I would claim that any philosophical endeavor based in some way on our intuitions is really psychology (e.g. the study of knowledge or value), although few fields seem to rely as heavily on them as ethics does, which is why it makes the best example. And not every philosophical position derived from intuitions must be abandoned; the reasons behind the intuitions might be picked out and defended, allowing the position to be based on those reasons instead.

December 29, 2006

Natural Interpretations

Filed under: Mind — Peter @ 4:12 pm

Functionalism holds that certain kinds of processes are conscious, and that all there is to consciousness is that process (meaning that the conscious experience of a particular individual at a particular moment can be identified in some sense with the physical realization of that process). Some people deny functionalism without giving it serious consideration because they are strongly committed to either free will (of the kind that isn’t compatible with determinism) or the idea of a soul, possibilities that functionalism rules out. I don’t know if it is possible to convince people to give up those kinds of beliefs, and so arguing for functionalism (or even materialism) to them might be impossible. However, these are not the only things that motivate people to reject functionalism. Some may simply want more details, a fully developed theory about what kinds of processes are conscious, before they feel comfortable endorsing the idea. That is a job for more detailed theories, and not my concern here. Another reason for some to reject functionalism is that it holds that anything which can support a complex process, such as a computer, could potentially be conscious, and some simply don’t see how data, a sequence of binary digits, could be conscious as we are.

Of course when we look at the human brain we may be equally skeptical. How can a bunch of neurons be conscious? When we look closely at both the human brain and the theoretical conscious computer, neither one seems made of parts that simply must be conscious. Certainly this isn’t a fault with functionalism, which says that consciousness only comes about when all the parts are put together in the right sort of whole, but how are we to assure ourselves that this is possible? Well, how do I know that I am conscious? Because I experience myself as an experiencer. By this I mean that I, as I live, experience not only the world, but also experience myself as the subject. When I walk down the street I am aware not only of the street but, if I choose to focus on it, of the fact that I am experiencing the street. I think we can extend this to cover the general case; it is acceptable to say that if some system experiences itself as an experiencer then it is conscious.

Although better than nothing, this criterion for determining if a system is conscious has its problems. For one it might seem to be circular: surely only conscious subjects can have experiences, and thus the criterion is no good to us in looking for consciousness, because in order to agree that some process is indeed experiencing itself as an experiencer we must have already agreed that it is conscious. We might then modify our criterion by substituting a less loaded word for “experiences”, such as “thinks” or “interprets”, but these aren’t substantial improvements: it is hard to say when something is thinking or interpreting, and some might argue that these too can only be done by conscious subjects.

Let us then approach the problem from a different direction, by considering a simple device with a sensor and a light. When the sensor is over a dark color the light remains off, and when it is over a light color the light turns on. In a very loose sense, then, the light indicates that the device “thinks” (for lack of a better word) that it is over a light color. Now some might argue that the light itself has no meaning, and that the meaning is in us: we see the light as meaning something and project that knowledge into the device. But if that were the case, consider our light/dark detector and compare it to three other devices: one that lights up randomly, one that lights up for the color blue, and one that is always lit. If we believed that the light itself had no meaning then we would have to consider the light being on in each of these devices as essentially the same. But clearly this isn’t the case. The light in each of these devices has what I call a natural interpretation. In two of the devices the light is meaningless, but in one it indicates light colors, and in one it indicates the color blue. I say that this is a natural interpretation because there are facts about the construction of the devices that determine that the lights will come on under certain circumstances, and these facts are independent of us. Now we might be tempted to pass this off as simple correlation, specifically that the light is correlated with certain conditions, but that doesn’t quite capture the idea. For example, one of our devices could be malfunctioning, say a wire has come loose inside it. It is still the case that there are facts about the device, specifically its normal condition, that present the light as indicating something, and thus even in a malfunctioning device the light may still indicate a condition even when that condition isn’t present. This is why it seems natural to use the word “thinks” to describe the device: the malfunctioning device may “think” that there is a light color, but it is “mistaken”. So we can restate our criterion for consciousness: if a system “thinks” of itself as an experiencer then it is conscious, using the definition of “thinks” developed here.
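If it helps to make the devices concrete, they can be sketched in a few lines of code. This is only a toy illustration of my own; the particular class names and the threshold rule are invented for the example, not features of any real hardware:

```python
# A toy sketch of the devices discussed above. The point is only that the
# rule built into each device, not anything about us, fixes what its light
# naturally indicates.
import random


class BrightnessDetector:
    """Light turns on over light colors, stays off over dark colors."""

    def __init__(self, threshold=0.5, loose_wire=False):
        self.threshold = threshold    # a fact about the device's construction
        self.loose_wire = loose_wire  # a malfunction; the normal rule still stands

    def light_on(self, brightness):
        if self.loose_wire:
            # The device "thinks" it is over a light color, but is "mistaken".
            return True
        return brightness > self.threshold


class RandomBlinker:
    """A device whose light has no natural interpretation at all."""

    def light_on(self, brightness):
        return random.random() > 0.5


detector = BrightnessDetector()
broken = BrightnessDetector(loose_wire=True)
blinker = RandomBlinker()

print(detector.light_on(0.9))  # True: the light indicates a light color
print(detector.light_on(0.1))  # False
print(broken.light_on(0.1))    # True, even though the color is dark
print(blinker.light_on(0.1))   # means nothing either way
```

The same light being on means something different in each case only because the construction of each device differs, which is the sense of “natural interpretation” at work here.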

Now let us turn to the human brain, something that we can be confident is conscious. Can the human brain meet the criterion we have developed? First we need to develop an idea as to how brain states can be naturally interpreted to be about anything, since the brain is not a simple system. Let’s consider a simple mental concept, such as “dog”. When the concept “dog” is part of my consciousness I assume there is some characteristic pattern of activation or signal, but how, as an outside observer, are we to know that this pattern corresponds to “dog”? Well, there are certain facts about the operation of the brain that allow us to know that a specific pattern of activation is most likely to trigger certain other characteristic activities. In the case of “dog” these would probably be mental images of dogs, the word “dog”, facts about dogs, etc. Now there are also facts about the brain that tell us what kinds of activity are likely to result from certain sights and sounds. Thus we can connect the two and determine whether any of the characteristic activities likely to be triggered by the pattern of activation under investigation correspond to some kind of sight or sound. And if they don’t, we can see what they are likely to trigger in turn and determine what sights and sounds those correspond to. If the pattern we are investigating really is the “dog” concept then the associated sights and sounds will likely be associated with dogs, and the associated concepts will also likely be connected to dogs in some way. Thus, looking at a certain pattern, we could say that there is a natural interpretation of that pattern as being the concept “dog”.
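The procedure I have in mind can be sketched very roughly as follows. The tables here are invented stand-ins; the brain obviously does not come with such tables, but facts about what a pattern tends to trigger, and which activities correspond to sights and sounds, would play the same role:

```python
# A rough sketch of the interpretive procedure described above, with
# invented data. "pattern_X" stands for the pattern of activation under
# investigation; the tables stand for facts about the brain's operation.
TRIGGERS = {
    "pattern_X": ["image_of_dog", "word_dog", "fact_dogs_bark"],
    "image_of_dog": ["sight: four-legged furry animal"],
    "word_dog": ["sound: spoken word 'dog'"],
    "fact_dogs_bark": ["sound: barking"],
}

SIGHTS_AND_SOUNDS = {
    "sight: four-legged furry animal",
    "sound: spoken word 'dog'",
    "sound: barking",
}


def associated_sensations(pattern, depth=2):
    """Follow the 'likely to trigger' relation outward from a pattern and
    collect the sights and sounds it eventually connects to."""
    frontier, found = [pattern], set()
    for _ in range(depth):
        next_frontier = []
        for p in frontier:
            for triggered in TRIGGERS.get(p, []):
                if triggered in SIGHTS_AND_SOUNDS:
                    found.add(triggered)
                else:
                    next_frontier.append(triggered)
        frontier = next_frontier
    return found


print(associated_sensations("pattern_X"))
# If the sights and sounds cluster around dogs, the pattern has a natural
# interpretation as the concept "dog".
```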

Obviously it is harder to determine a natural interpretation for a concept like “self”, but I think it can be done. You might start by determining if there is a common concept triggered when the system happens to look at itself (because in people this triggers the idea that I am looking at “my” hand). From there you would look for an associated concept that is likely to bring up information about the current state of that system (for example, when you think about yourself, the fact that you are hurt, or tired, or hungry often springs to your attention). If we can find something like this it would certainly be natural to interpret it as the concept of self. To determine if this self is “thought” of by the system as an experiencer takes an extra step. I would propose seeing if this self concept can trigger any memories, which would seem to indicate that the system thinks of itself as having experienced those events, as sketched below. The best part about this process, however, is that if it were carried out on the human brain it seems likely that it would conclude that the brain is conscious, as it should. Now I can’t say that it would definitely succeed, because we don’t know enough about how the brain works; but given what we do know, and our own experience of being conscious, it seems likely. And we could carry this process out on a computer as well, given that we have access to both the states the computer is in and the instructions that govern how it moves from one state to another. And thus a computer, constructed properly, could very well be conscious, or at least we could be as sure of the computer being conscious as we are that other people are conscious.
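The extra step, checking whether a candidate self concept can trigger memories, can be sketched in the same invented style as before (again, all the names and data are illustrative assumptions, not a claim about how any brain or computer is actually organized):

```python
# Invented stand-in data: a candidate "self" pattern and what it tends to
# trigger, plus a set of activities naturally interpreted as memories.
TRIGGERS = {
    "pattern_self": ["report_of_current_state", "memory_of_yesterdays_walk"],
}
MEMORIES = {"memory_of_yesterdays_walk"}


def thinks_of_itself_as_experiencer(self_pattern):
    """The system 'thinks' of itself as an experiencer if its self concept
    can trigger at least one memory."""
    return any(t in MEMORIES for t in TRIGGERS.get(self_pattern, []))


print(thinks_of_itself_as_experiencer("pattern_self"))  # True
```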

December 28, 2006

An Ethical Spectrum

Filed under: Ethics — Peter @ 3:48 am

I know, the title on this image is redundant, but in my defense I may want to use it again.

In the above diagram I have graphically represented one way of looking at ethical theories, based on where in the causal chain of events they base their judgments. On one end we have the “core personality”, a person’s most fundamental, and formative, attitudes and motivations. Ultimately these can be said to be the cause of everything a person does voluntarily (or at least one of the causes). On the other end we have the long-term results, and in the middle we have the moment of choice. The choice is an interesting point, because it divides the internal from the external. Leading up to the choice we have mental events, and after the choice we have external events. If I had to say which one the choice itself is I would say internal, but the division between what is external and internal is a slippery subject. The choice itself is the commitment to action (which includes the sending of impulses to initiate motion), not merely the mental decision to do something (which would fall under immediate intentions on the spectrum), just to be clear.

As you can see I have attempted to show which general areas correspond to which kinds of ethical theories. Since what exactly ethical judgments depend on varies from theory to theory, even when theories belong to the same class, I have picked out a range instead of a single point. The area of the chart I have unhelpfully labeled as “Other” has no “big name” ethical viewpoint associated with it (or at least none springs to mind), but I think the common sense ethics that many people subscribe to often falls in this area. Such ethical systems judge people by what they meant to do, and tend to be forgiving of those who mean to do the right thing but are unable to make good choices (say because of drug problems, etc.). Another possibly confusing part of this diagram is the distinction between early and late consequentialism, terms that are my own invention, designed to point out a key difference between two varieties of consequentialism. Late consequentialism is what consequentialism is usually thought of as: judging people based on the actual results of their actions. In contrast, early consequentialism makes its judgments based on what would normally be the results of the person’s choices.

I admit that the ethical theory I like best is a kind of early consequentialism, so in some ways this diagram may be designed to be a subtle argument for it. Specifically, such theories make their judgments based on a unique point in our spectrum, the one that separates the internal from the external, and so, unlike other ethical theories, early consequentialism coincides with an area on this chart that is already distinguished, and thus, in my opinion, seems more elegant. Of course this isn’t a serious argument for early consequentialism; just because the theory is elegant in some ways doesn’t mean that the world must work that way. However, such elegance may be persuasive enough to convince some to give early consequentialist theories a closer look.

December 27, 2006

Ethics and Sociopaths

Filed under: Ethics — Peter @ 5:03 am

Sociopaths were once thought of as simply people without feelings. However, things are never that simple: what was once thought of as simply a lack of feeling is now recognized to be often correlated with antisocial behavior. As such, the term sociopath no longer officially refers to any psychiatric condition; people who were once called sociopaths would most likely be diagnosed as suffering from psychopathy or antisocial personality disorder. And besides, the mind is rarely so simple. Even so, here I will misuse the term sociopath to describe a kind of “ideal” case, of a person with no emotions (no further strings attached), in order to do some meta-ethics.

Specifically I would ask: could our fictional sociopaths be ethical? Now I am not asking if they would in practice act ethically. In principle there is no reason that they couldn’t act in any way they chose, but for many people their emotional aversion to doing wrong, especially to harming others, is one of the primary forces that keeps them on the straight and narrow. A sociopath would of course not be motivated to do wrong by rage, or a misguided desire to be happy, so perhaps they might have some advantages over the rest of us too. But, in any case, the question I am asking here is designed to discover if there are any reasons derived from our ethical systems that would rule out sociopaths from being considered upstanding people, in principle, regardless of how they act.

Obviously there are no such barriers inherent in a consequentialist system. A “late” consequentialism would judge them only on the results they achieve, and sociopaths can achieve the same results as the rest of us; likewise an “early” consequentialism would judge them based only on the choices they make, and again they can make the same choices as us. And a deontological ethics would have no special reservations about them either, being close in spirit to consequentialism. However, there are systems of ethics that might prevent sociopaths from being considered ethically upstanding, no matter how they chose to act. For example virtue ethics, which says that the virtuous person (defined as a person who has certain qualities) is the ethical person, would probably say that no sociopath could be considered morally good, because often what makes a virtuous person includes something like love for one’s fellow man (actual language may vary), and a sociopath, having no feelings, can’t meet this standard. Similarly, an ethical system that judges people based on their intentions may rule out sociopaths, especially if it cares about reasons for action, because sociopaths will often be motivated by cool calculation (perhaps deciding not to do something because it is “wrong”), and not by the caring for others that may be required.

But consider a world full of sociopaths acting ethically (i.e. doing whatever would be considered ethical by a normal person, even if the system of ethics that we are considering prevents them from being considered truly ethical). Obviously this world won’t differ (on the large scale) from a world of people with normal feelings acting in basically the same way. But if this is the case, why should we prefer the world with normal people over the world full of sociopaths (assuming we were judging the value of these worlds, not picking one to be born in)? I submit that the only reason to favor one over the other (ethically) would have to be derived solely from the ethical theory itself, as there are no substantial differences that would help us pick one world over the other. And I think this shows that an ethical theory that rules out sociopaths in principle from being considered ethical cannot be rationally defended (we have identified at least one case in which there are no relevant objective criteria that can help us pick one world as better, and thus the judgment of the theory that the world of sociopaths is ethically inferior to the world of normal people must not be based on such factors, meaning that the theory is making an irrational or unjustified distinction). Of course we could be stubborn and argue that the existence of emotions in one world is an objective reason to consider it superior, but I find that hard to say with a straight face, since there are as many negative emotions as there are positive ones.

