On Philosophy

July 26, 2007

Too Much Equality

Filed under: Society — Peter @ 12:00 am

Unjust inequalities are of course undesirable, but not every inequality is unjust, and excessive equality has its downsides as well. Not all people are the same, and the differences between people make some of them better than others. Trying to ignore that fact results in a society that discourages people from fulfilling their potential. There is an analogy here to the artificial economic equality enforced under communism. Artificial economic equality treats everyone as though they were equally productive, and this discourages anyone from being productive, since there is no incentive. Additionally, by failing to reward productivity, communism leads people to forget that productivity even matters. Similarly, artificially treating everyone as equal discourages people from being more than average, and by not celebrating greatness we are led to forget that it even matters.

I think there are primarily two reasons why modern societies seem intent on perpetuating the myth that everyone is equal (at least in intrinsic value if not in purchasing power). The first is that recently most societies have undergone a process of self-improvement in which a number of unjust inequalities have been eliminated. Because of the difficulty in eradicating these unjust inequalities we are left with the suspicion that any inequality is an unjust inequality. I grant that it is appropriate to be suspicious of inequalities between people, because they can do a great deal of harm. However, it is relatively easy to separate most unjust inequalities from justified ones. An unjust inequality differentiates people based on what they are, while a just (or at least not unjust) inequality is based on who people are. Differentiating people based on what they are means differentiating them based on some external property, such as who their parents were, which is of course the source of most inequalities, since that is the easiest way to separate people into different groups. In contrast, differentiating people based on who they are means differentiating them based on internal properties, properties of their personality or character (of course we only have access to those properties by observing external properties, like their behavior, but all this means is that our judgments are fallible).

The second reason that the myth of equality is perpetuated is that it is flattering to the ego. It is nice to believe that there is no one who is better than us, because it makes us feel big. Even when someone is shown to be better than us at some specific task we can comfort ourselves by appealing to the idea that we are equal, and thus that there must be some way in which we are better than them. Of course this is fallacious reasoning, but it is psychologically appealing nonetheless. Now I’m not trying to accuse people of purposefully promoting certain ideas about equality just to make themselves feel better, but I am claiming that when those ideas are introduced, for whatever reason, their psychological appeal biases many people unconsciously in their favor, making such equality seem better than it really is.

As I already mentioned, one effect of this artificial equality is that people are discouraged from striving to be better (or even from realizing that they should strive to be better), and thus we end up with a society of mostly average and unremarkable people. Depending on your attitude towards humanity that may or may not be a bad thing. If you are of a more classical persuasion you might feel that personal excellence and nobility are the whole point, and thus that a society that doesn’t encourage those characteristics is a disaster. But if you are of a more modern bent then you may think that all that matters is pleasure, in which case individual achievement doesn’t matter. But whether people are led to excel or not is not the only consequence of treating everyone as equal. Another effect is that it makes people poor judges of others, which can be disastrous if your political system involves electing officials. If everyone is equal then who you prefer is basically a matter of personal preference, and you are free to make your decision based on whatever criteria you like, which means that when it comes to electing officials someone who is objectively better (by virtue of being more competent) may be passed over for someone who is more likable. Of course just permitting ourselves to consider some people as better than others wouldn’t automatically make people better voters, but it would allow candidates to actually compare their personal strengths to each other (in addition to the issues) without sounding pretentious (as it stands now no candidate can directly claim that they make better decisions, are more responsible, or are more intelligent without falling out of voter favor by violating the assumption that everyone is equal).

July 25, 2007


Filed under: Language — Peter @ 12:00 am

Not every interaction between people constitutes language. For example, splashing water on someone is not in normal circumstances a linguistic act. Language (envisioned as communication) requires, at a minimum, a speaker who is intentionally directed at some state of affairs and an interaction between the speaker and listener that is designed to intentionally direct the listener to the same state of affairs. Being splashed with water isn’t linguistic, then, because while it does tend to direct the person being splashed to the fact that they are being splashed, the person doing the splashing wasn’t thinking about that fact beforehand with the idea of communicating it.

But while this is a reasonable description of most language (meaning the use of words) it seems to fail to properly capture imperatives (and hence questions as well, since every question can be seen as an imperative of the form “tell me the answer to: …”). It’s not clear that when an imperative is uttered there is a shared intention, at least in any straightforward manner. It is true that the speaker is probably intentionally directed at the idea of the listener doing something, but it is not their purpose to get the listener to entertain that idea at all; they just want them to do it. Imperatives then may seem better understood as a way for one person to be controlled by another, not as communication. We could think of imperatives as a kind of linguistic remote control we have for other people. We press buttons on the remote by uttering certain sentences, and by pressing those buttons we trigger certain behaviors in other people. And if this is how imperatives should be understood then clearly they aren’t communication, any more than operating your TV is communicating with it.

This model captures the overall structure of how imperatives operate, but it sweeps under the rug the ability of people to decide for themselves whether to obey that imperative or not. People then are more like broken TVs, which don’t always obey the remote. But unlike a broken TV the choice about whether to obey an imperative is not made randomly or unconsciously. In response to imperatives people think about the choice and then choose whether to do as asked or not. And that gives us a common pattern of intentional directedness on the part of the listeners, namely that they all become intentionally directed at the choice between whether to do as asked or not to do as asked. Of course for this to count as communication the speaker must also be intentionally directed at that choice, but it is reasonable to suppose that the speaker is in fact so directed. Since the speaker realizes that the listener is in fact an intelligent being and not a TV they must also realize that potentially the listener may not do as asked. And so they must devote at least some thought to the choice they are presenting to the listener, to predict whether they will obey or not.

And if this is true then imperatives are a form of language, properly speaking, and not some form of remote control. This may seem like the intuitive result, but actually it is quite unintuitive given some modern approaches to language. Language, as I have defined it here, is simply communication. And many recent thinkers hold that our use of words often performs other functions besides that of communication, and, more importantly, that it can have one of these functions while failing to serve as a means of communication. Imperatives might be thought of as an example of this because, as I mentioned above, we could think of them as a form of control and not of communication. In any case, we can test this hypothesis by seeing if we can in fact replace imperative expressions with equivalents that have the same effect but are clearly examples of communication. Given the considerations here I think the proper replacement for “do X” is “you have a choice between doing X or doing ~X, and I the speaker want you to do X”. Obviously that is a bit long-winded, but I think it could serve as a replacement, thus showing that we don’t need imperatives (showing that imperatives don’t do anything above and beyond normal communication). Now at this point some may wonder why I didn’t just shorten my replacement to “I the speaker want you to do X”. While that is something that is also communicated with each imperative, I think that by itself it does not capture the complete meaning of the imperative. The fact that someone wants us to do something may not by itself lead to us acting. It may be something that we cannot simply choose to do (such as “to be a better person”), or it may be the case that we are already satisfying that want and thus don’t need to take action. Communicating that the listener has a choice to make thus also communicates that the listener can and should choose a particular course of action in light of that want (and in fact has to choose once presented with the choice), one that they may not have come to contemplate just because they knew that want. In a sense then communicating that the speaker wants something is simply too general; our imperative “translations” are thus included in the set of sentences that express the desires of the speaker, but not every sentence that expresses the speaker’s desire corresponds to some imperative.

July 24, 2007


Filed under: Language — Peter @ 12:00 am

You might be curious as to what kind of thing a promise is. But that of course would be to grossly misunderstand promises. A promise is not a thing but a claim, specifically a claim about a claim regarding a person’s future behavior. The promise says that this claim about future behavior is assured to be true, so that while we normally assume that any predictions about the future are unreliable we shouldn’t in this case. The promise then serves as a kind of meta-linguistic tool, which can be used to overcome certain dilemmas involving collective action.

Many have said that you shouldn’t break your promises because it is ethically wrong. I’m not exactly sure how sound that claim is; certainly to mislead people intentionally by making a promise known in advance to be false is probably unethical, but that is true of intentionally misleading people in general. And if the promise is broken without malicious intent then all it seems to do is make us bad predictors of our own behavior, not unethical. For example, if I incorrectly predict tomorrow’s weather then I am not a bad person. Of course a promise might be treated as more than just a claim; we could choose to see it as an implicit contract between two people. But again, unless we accord special privileges to contracts, it doesn’t seem like breaking them is necessarily immoral. Which is not to deny that we have strong intuitions against breaking promises, but these intuitions exist not because breaking promises is immoral, but because there are strong practical reasons not to break them (or so I claim).

The practical reason not to break promises is that breaking a promise renders you unable to make more promises. Of course you can still say that you promise, but the people you are promising to will treat your promise just as a simple prediction of your future behavior, and not one that is especially certain. The reason this is so is that communication rests fundamentally on reliability. A listener takes the speaker’s words to mean what they do in part because the listener thinks the speaker can reliably use the terms to designate what they are supposed to designate. By breaking a promise you thus indicate that you are unreliable at using “promise” to designate claims about future behavior that are certain, and hence listeners no longer take your use of that word as designating anything at all.

Of course this is not a phenomenon unique to promising; it could happen, in theory, with any piece of language. For example, someone could be unreliable at using color words, meaning that they randomly match color words with actual colors. Such a person would be effectively unable to communicate about color, because calling the color of something “red” would not be taken by the listener to indicate anything at all. The use of ethical terms is similar. It is quite possible to use ethical terms just to designate what you like and don’t like, and not what is really right or wrong. But if you do that then people simply stop taking your use of them as indicating right and wrong, and only as indicating your preferences. Communication, as I have noted above, is based on trust: the listener trusts the speaker to use words in their usual senses. When that trust is broken communication can’t happen, and that is to the disadvantage of both the listener and the speaker. Of course promises do stand out as a situation where it is easy for listeners to lose trust in the speaker. It is quite possible to misjudge color on occasion and still be able to communicate about color, but breaking only a few promises leads people to ignore future occurrences of promising. I don’t think, however, that this is because promising is something special; rather it is just that there are fewer mitigating factors. When I misjudge color there are many possible reasons for that, not all of which involve me being a bad judge of color. Thus people may still believe me to be a generally reliable judge of color in spite of a few errors. But since promises are just about our future actions there are far fewer mitigating factors to appeal to; since we are in control of our actions it is hard to explain why we didn’t act as promised (and since we are only supposed to promise when we can be sure, the fact that something made us unable to keep our promise indicates that we couldn’t be as sure as we thought, which indicates that we are bad at judging when we can make promises). Thus even a few broken promises mark a person as an unreliable promiser.

And not only does breaking promises make it nearly impossible to promise in the future (or at least to get people to take your promises as reliable), but it makes people question your motives for promising in the future as well. As mentioned above, since we control our actions any promise breaking appears intentional. And if it is intentional then people will suspect that you made the promise in order to achieve some benefit for yourself. Which means that in the future making a promise will lead people to conclude not that you will actually act in a certain way, but that you want them to believe that you will act that way. And such reasoning leads to a general suspicion about motives and about how self-serving you are, which in turn leads to mistrust, since every action can appear self-serving upon close scrutiny. So the moral here is to keep your promises. Or, if you can’t do that, don’t be caught breaking them. Or, if you can’t do that, at least have an excuse ready that doesn’t involve selfish reasons for breaking the promise.

July 23, 2007

The Will To Excellence

Filed under: The Good Life — Peter @ 12:00 am

Not to be confused with the will to power.*

To have the will to excellence is to strive always to improve in at least some area, and to never be satisfied with “good enough” in that context. It might be possible to have a will to excellence in all areas of life, but I suspect that would make the will to excellence a burden, because no one has the resources to devote to improving at everything. But usually when the will to excellence manifests itself it is restricted to a specific task or discipline, and so here I will simply ignore the possibility of the will to excellence being harmful for that reason. Despite its simplicity I think that the will to excellence is one of the most admirable qualities a person can have.

Not everyone has the will to excellence, in fact I think that most people lack it. Even so the will to excellence is something that I think should be desired, and not just because it is called excellent. In general people with the will to excellence tend to contribute more to society than those without it do. This is because those without it simply do what they need to get by and no more, and thus their motivation to accomplish something is mostly external; they act only because of associated rewards or punishments. In contrast those with the will to excellence have an internal motivation, and so they are led to attempt things which society can’t properly motivate, like great art. Of course not everyone with the will to excellence actually succeeds in doing great things, but some of them do. And so, from the viewpoint of society, it would seem that we should encourage the will to excellence.

Now that just means that there will be some social pressures to have the will to excellence. Which may make it seem like having a will to excellence is equivalent to being exploited by society, as you would be giving more to society than it gives to you. I maintain that there are also personal benefits to the will to excellence that make it worth having, even if society isn’t structured to properly reward people who have it. One reason to pursue the will to excellence is that having it makes it easier to feel that your life is well lived. I wouldn’t claim that it is required for a life to be well lived, but psychologically the feeling that you are getting better and better at something, and hence the feeling that you are accomplishing something, is a powerful one. And that in turn means that fewer distractions from life are required, making it easier in turn to follow the will to excellence. The will to excellence also has a number of beneficial side effects. Trying to improve yourself requires the ability to think originally, at least eventually, when you “catch up” to the state of the art in whatever your will to excellence is focused on. Similarly, being driven by the will to excellence brings with it increased originality and a willingness to stand out from the crowd (since being better than average at something is already a way of standing out).

But if the will to excellence is so good then why doesn’t everyone have it? One possibility is that most people just don’t have the capacity for a will to excellence. I think, however, that is an overly pessimistic view of human nature (but really there is no way to tell for sure). Another possibility is that the strong desire to conform to social norms prevents people from developing the will to excellence. Developing the will to excellence means being different from everyone who lacks the will, and it means being different from most of the people with it as well, since they are focused on something different. But I think the strongest factor preventing people from developing the will to excellence is simply that society seems not to encourage it. Society rewards the lucky, but not necessarily those with an inner drive, even when they do manage to accomplish something great. But it remains somewhat of a mystery as to why society doesn’t encourage the will to excellence, given that I established above that we should expect some social pressures in its favor. If I had to guess I would blame democracy. Democracy is based on the principle that everyone is of equal importance. Which means that it discourages thinking of some people as better than others (except when they have more money). Since the will to excellence is not usually rewarded financially society would have to reward people with it by recognizing them as better for having it. But that contradicts the democratic ideal, and so the will to excellence seems to go mostly unrewarded, and hence mostly unmotivated.

* Although you might be led to confuse it with the will to power, partly because Nietzsche can be vague about exactly what the will to power is. In a generous reading of Nietzsche the will to power is primarily a form of self-control, which might seem to overlap in part with self-improvement, as some form of self-control is necessary for self-improvement. However, the will to power is supposed to be present in everyone, which is not the case with the will to excellence. And, more importantly, it is not clear that the will to power really is an ideal. Certainly Nietzsche uses language that would lead us to believe that the will to power is something to be celebrated, but I think that whether that is actually the case receives too little critical attention.

July 22, 2007


Filed under: Epistemology — Peter @ 12:00 am

Ultimately justification must bottom out somewhere. If there were no foundation for justification then either our claims could never be justified (because it would require an infinite number of steps to justify them), or circular justification would have to be permitted. Obviously circular justification cannot be permitted (because with it anything except an internally contradictory claim can be justified). But it is equally clear that there can be no indubitable foundation for justification; there are no necessarily true axioms that all our claims can be founded upon. But the idea that we need such an indubitable foundation is an outdated one, which stems from confusing justification with a process that brings us to necessarily true knowledge. Justification only has to track the beliefs that it is rational or reasonable to have, not those that are necessarily true. And so we can permit the foundation of justification to be less than certain. Indeed the best position to take with respect to justification is that it rests ultimately on facts that are rational or reasonable to trust tentatively, but which may possibly be shown to be false. Let us call those facts evidence.

But of course simply labeling this foundation with the word evidence doesn’t help too much, since we aren’t trying to stick to the usual meaning of the word. What exactly counts as evidence is probably as nuanced as anything else in epistemology, but it is obvious that there are two major requirements: it must be able to be overturned, and it must not be a judgment. It may seem strange to require claims that serve as the foundation of justification to be able to be overturned. After all, wouldn’t it be better to base justification on facts that are known to be true without any room for doubt? But, unfortunately, we don’t have direct access to the truth, and probably never will. Thus whatever we take as evidence could at least possibly be wrong, without our being aware that it is wrong. But if it is possible for false evidence to be revealed as false then we can at least hold out the hope that if we are wrong we will eventually be made aware of the fact. Or, from a different angle, it is rational to use beliefs that can be overturned as the foundation for justification because by doing so we make our epistemic position improvable over time; although we don’t know how close we are to the truth to start with we know that if our evidence is wrong there is at least the possibility of becoming aware of that fact. Thus given enough time (an infinite amount of time to be precise) we can expect all false evidence to be revealed as false, and hence that our epistemic position will become as good as it can possibly be. On the other hand if we use beliefs that cannot be overturned as a basis for justification then our epistemic position has no hope of improving, and while we might possibly be right we have no way to know if we are right. Hence it is more rational to base justification on beliefs that can be overturned.

Hopefully I don’t need to say any more about falsifiability; in this day and age it is like beating a dead horse. The more interesting component of evidence is that it must not be a judgment. In the sense I am using it here a judgment is a conclusion reached from other data. To say why we can’t use judgments as evidence let me first just come out and say what evidence is. For us evidence is the data provided by our senses (which are falsifiable by disagreement between two senses and disagreement with someone else about what we are sensing). Let us now consider a judgment, such as that a particular painting is beautiful. This judgment could be falsifiable; we could believe beauty to be an objective property, so that if many people disagreed with us we would believe that we were in error about the painting. But even if this were the case we still couldn’t use such judgments as evidence. Practically, judgments are not guaranteed to coincide; there is not necessarily going to be a consensus as to which paintings are beautiful, and so justification based on them is not necessarily, or even probably, going to be truth tracking. And practically the judgments we make reflect our psychological features and not features of the world. More important, however, is the theoretical objection to using judgments as evidence. The theoretical objection is that judgments reflect the application of concepts. And a concept is not meant to reflect a basic feature of the world (it is not the case that there is some beauty-stuff that is added to the painting in addition to the colors and shapes); a concept is thought of as tracking some kind of arrangement of the basic features (a natural kind). And so concepts, while revisable, are revisable in different ways from evidence, which tracks the basic features themselves. Thinking of something as beautiful is not to say anything about what kind of thing beauty is. Or, in other words, a concept uninvestigated doesn’t have the right kind of content to serve as the basis for justification when reasoning about various things; by themselves concepts only tell us how we think about them. On the other hand evidence is supposed to serve as a claim about how things are (although what evidence we have does say something about us as well). As a closing remark I should note that it isn’t necessarily easy to tell where judgment begins and evidence ends (made more complicated by the fact that a judgment may serve much the same role as evidence does in justification when we have evidence to believe that our judgment reliably tracks certain facts). But this is just one further aspect of the falsifiability of evidence: what we thought of as evidence may turn out not to be. And we can determine what is a judgment by learning about how we receive information about the world (how our senses work); evidence is then about the features of the world that we know we receive information about through our senses because those features play a causal role in their operation, while anything else (beauty, right and wrong, causation) is a matter of judgment.

