On Philosophy

July 17, 2006

Even More on Intentionality

Filed under: Intentionality — Peter @ 1:20 pm

This is just a quick response to an objection to my earlier post on intentionality I found here. The objection can be summarized as follows: that in my “thought experiment” I have already assumed the conclusion that thoughts are localized in the brain. I argue that this objection is invalid because I am not discussing a thought experiment at all, but a real experiment that I actually carried out in order to test the nature of intentionality; I didn’t assume a conclusion, I discovered it by experiment.

Perhaps the best way to defend my conclusions, though, is to explain the logic behind them more thoroughly. My reasoning was as follows: if we assume that externalism (or non-causal intentionality) is correct, it must follow that the contents of our thoughts depend (non-causally) on the contents of the world. From that statement it follows that either the subjective experience of our thoughts, or our actions (specifically speech), depend on the contents of the world, since these are the only possible ways that we can make sense of the idea of “contents of thoughts” or simply have access to those contents in our lives. Thus when we run the experiment with the bubble we expect that, if externalism is true, there will be some difference in the contents of our thoughts when the bubble remains intact and when it pops, even if we aren’t observing the bubble (since the connection is non-causal). Since the contents of my thoughts were independent of the existence of the unobserved bubble, as far as either I or my partner could tell, either subjectively or based on how I was able to express myself about the bubble, we must conclude that one of the assumptions that led us to this conclusion was false.

One possible defense for the externalist is to argue that the content of our thoughts doesn’t influence either the experience of thinking or the public expression of our thoughts. If this is the case it seems that the content of our thoughts doesn’t have any real influence on our thinking at all, and thus it seems silly to say that it is somehow part of our minds. I could understand a position where the contents of the world were seen to have a (non-causal) impact on the objective truth of our thoughts, but the objective truth of our thoughts isn’t a factor that seems to be part of our minds (it would be much harder for people to make mistakes if objective truth were part of people’s minds, since they would be able to tap into their “truth sense” to correct their errors).

One could also argue that the content of thoughts influences our thinking only some of the time, or only for certain people. I will reject these responses out of hand, simply because they don’t provide explanations of why or how this could be the case. Let us make them more concrete, then, by insisting that the intentional relation influences our thinking only when we directly perceive the objects we are thinking about. This claim, however, is essentially a more limited form of the causal version of intentionality, and has the same failings.

I look forward to hearing more defenses of externalism, since it seems to fly in the face of all the experimental evidence we have regarding the relation between the brain, the mind, and our subjective experience of consciousness.

Which Actions Are Good?

Filed under: Ethics — Peter @ 12:58 am

Determining when an action is good is not an easy task, and this post will not attempt to explain how to classify every action as good, bad, or morally indifferent, but simply to show why certain kinds of actions can’t be good.

There are some philosophers, consequentialists, who define an action as good if it has good consequences. Although this idea has the benefit of being simple, it conflicts with our moral intuitions. For example, consider the following famous situation: there is a hospital with five dying patients, each of whom needs a different organ transplant to live. So the doctor kills one of his (or her) healthy patients and distributes the organs to the five dying patients, who are saved. I think most of us would judge the doctor’s action as bad, yet to accept consequentialism seems to be to agree that the doctor’s action was in fact good. One way that consequentialists may attempt to defend themselves from this example is by pointing out that having the healthy patient be killed would cause people to stop going to the hospital (and that these undesirable consequences make the action bad after all). Unfortunately for consequentialism this response is flawed, because our moral intuitions don’t change at all if we assume that the doctor was able to kill the healthy patient in an undetectable way (perhaps making it look like an accident), and now even these consequentialists seem forced to say that the doctor’s action was good.

Another way for consequentialists to approach this scenario is to assert that the doctor’s actions need to be judged based on their immediate consequences, not based on the final results. Thus we might now be able to say that the doctor’s actions were wrong because killing the one healthy person was wrong, even if saving the five dying ones was right. If we accept this defense then consequentialism becomes open to another objection, specifically that if a person’s actions accidentally result in harm to someone else the consequentialist view seems to still classify their accidental action as wrong. However, I think most of us would agree that an action resulting in harm accidentally is morally indifferent, assuming that the accident was not caused by negligence. The only response the consequentialist can make to this is to add the additional condition that a person’s action is wrong if the results are bad and those consequences were the intended ones. However, to adopt such a position is to abandon pure consequentialism, and this is all that is required for the remainder of the argument to proceed.

So for an action to be morally good we accept that there are certain motivations or intentions that must be present in the person acting, and ones that must be absent as well. I propose that self-interest must not be a motivation for an action if that action is to be considered good. This is not to say that all actions having self-interest as a motivation are necessarily bad, for example if I eat because I am hungry I am not committing a sin, simply that such acts will be at best morally indifferent. Nor am I saying that the absence of self-interest guarantees an action to be good; it is a necessary condition, not a sufficient one. Unfortunately I cannot provide a universal proof of this statement, but there are two things in its favor: it agrees with our intuitions (specifically that selfishness/greed is to be avoided), and it follows as a consequence from a great many theories about ethics.

This is also not to say that self-interest can’t play a role in forming the principles that do motivate us to perform good acts. For example it is likely that when your friends are happy you are happy as well, so you adopt the principle that “I should do things that make my friends happy”, and your reasons for adopting this principle may be motivated by self-interest. Now when you give a friend a gift there are two possible motivations. If you reason thus: this action will make my friend happy and hence me happy, then your action is not morally good. However if you simply reason: this will make my friend happy, then your action has at least the potential to be morally good. This ties in closely with how moral behavior is learned: as children people are motivated solely by punishments and rewards, and hence aren’t acting in a morally good manner; later, when the punishments and rewards are removed, they find reasons to continue acting morally on their own, and become truly good people.

This leads me to my final point, which is that the constant threat of punishment for bad behavior and promise of rewards for good behavior greatly reduces the likelihood that a person will act in a morally good manner. (The movie “A Clockwork Orange” makes a similar point.) This is because the knowledge that they will be punished or rewarded for their actions will constantly be floating in the back of their minds, and hence it is likely they will be at least partially motivated by it. This explains the following observation: religious people, despite being more motivated than the non-religious to do the right thing, actually act no better or worse on average than their non-religious counterparts. How does our analysis above explain this? Well, since the religious constantly have to worry about punishments or rewards from their deity they are less likely to be acting in a morally good manner, and hence are more likely to give in to temptation (since they have developed fewer reasons to do the right thing), especially when they know that they can later be forgiven for their mistakes. Now I am not saying that the religious can never do the right thing, simply that it is only possible after they have learned to do the right thing because it is the right thing, and not because of the rules set out by their deity.
