On Philosophy

November 2, 2007

Social Parasitism

Filed under: Ethics — Peter @ 12:00 am

To be a social parasite is to benefit equally from the work of the group while contributing less than an equal share, and thus getting what many would describe as an unfair advantage. Essentially all unethical behavior is a form of social parasitism: we all contribute to the wellbeing of the group by binding ourselves to act ethically, and thus all give up, roughly equally, the unethical behavior we would like to engage in. And, as a result, we all get to enjoy the benefits of a society composed of ethical people (a functioning society), which is well worth what we give up. Someone who acts unethically in this situation is being a social parasite: they contribute less to the wellbeing of society by not binding themselves by all the ethical rules, but they still enjoy the benefits of a mostly ethical society, since no one is acting unethically towards them. Of course as a society we try to make such parasitism as unprofitable as possible, by punishing the parasites we catch. This works fairly well for the kinds of parasitism that we can catch and have the resources to punish. But not every form of social parasitism can be easily identified. Consider, for example, someone who slacks off (someone who contributes less than their fair share in some activity), but does so unnoticeably, so that the only way we can detect the slacking is by noticing that the group isn’t performing as well as it should. We might try to catch the person engaging in this behavior by switching up the way we divide people into groups, in order to detect indirectly who the parasite is (by drawing a correlation between the parasite and underperforming groups). But an intelligent parasite can evade detection by slacking off only when their group contains a particular individual, and thus by symmetry it would be impossible to tell who the parasite was. Or two parasites might agree to act that way with respect to the same individual, and then this unlucky individual would seem to be the parasite. Not to mention that the effort spent trying to identify who the parasite is may end up costing more, in the sense of reducing group performance, than the parasite themselves does.

This would seem to imply that certain manifestations of parasitism, those that involve contributing less in essentially undetectable ways, might be successful strategies. And I can’t deny that there may be possible situations in which this kind of parasitism might work, but I suspect that they are fewer in number than might initially be supposed, because of factors that have so far remained hidden. To expose them let us consider a hypothetical farming community composed of 10 members. Each of these 10 members can contribute up to 10 units of work, and for every unit of work contributed one unit of the final product is created. Now this community needs 90 units of the final product to survive, but any remaining product above and beyond that they sell, producing a number of units of income equal to the amount of product they sell, which they divide equally among the 10 farmers. (So if everyone contributes their full 10 units the community produces 100 units, leaving a surplus of 10 and thus 1 unit of income per farmer.) Now let’s consider a hypothetical parasite in this situation. The parasite does feel that the 1 unit of income they receive is of value greater than or equal to a unit of labor, meaning that everyone putting in 10 units instead of just 9 units of work (the minimum required to keep everyone alive) is worth it to them. However, it isn’t worth 5 units of work to them, and so they reason that they can simply contribute 5 units instead of their expected 10. Although this will cut down on the income they receive at the end of the year, they will still be coming out better overall, because they will have avoided 5 units of work and still made some income.

And suppose that they put this into practice. This means that the community will produce:
10*9 + 5 – 90 = 5 units surplus = .5 units of income for each farmer
But it is quite possible in this situation that at least some of the remaining farmers will feel that .5 units of income is not worth the extra unit of work they have to put in. Suppose that one farmer feels this way and, being an honest person, announces it beforehand, saying that they will only contribute 9 units of work, but that they don’t expect to receive a share of any income generated as part of a surplus. Then the community will produce:
10*8 + 9 + 5 – 90 = 4 units surplus = 4/9 units of income for each of the 9 farmers (< .5)
As you can see, even with an honest slacker (one who reduces the amount they contribute, but who doesn’t expect a share of the communal rewards as a result) the amount of income is reduced still further in the next year. And with income reduced again, still more farmers may decide to contribute less, until income falls to zero, at which point all the honest farmers announce that they will only contribute 9 units of work, forcing the parasite to contribute 9 as well, or starve. And that puts the parasite in a worse position than when they started, because, as was mentioned initially, that one unit of income was worth one extra unit of work to them.
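The arithmetic above can be checked with a short sketch (the function name and setup are mine, chosen only to mirror the example):

```python
# Sketch of the farming example: 10 farmers, each able to contribute up to
# 10 units of work; one unit of work yields one unit of product; 90 units
# of product are needed to survive; the surplus is sold and split among
# the farmers who still claim a share of it.

NEEDED = 90

def income_per_sharer(contributions, sharers=None):
    """Surplus income per farmer who takes a share of the surplus."""
    if sharers is None:
        sharers = len(contributions)
    surplus = sum(contributions) - NEEDED
    return surplus / sharers

# Everyone contributes fully: 10*10 - 90 = 10 surplus, 1 unit each.
print(income_per_sharer([10] * 10))            # 1.0

# One parasite contributes 5: 10*9 + 5 - 90 = 5 surplus, 0.5 each.
print(income_per_sharer([10] * 9 + [5]))       # 0.5

# An honest slacker drops to 9 and forgoes their share:
# 10*8 + 9 + 5 - 90 = 4 surplus, split among the 9 remaining sharers.
print(income_per_sharer([10] * 8 + [9, 5], sharers=9))  # ~0.444
```

Each step of slacking shrinks the per-farmer income, which is what drives the cascade described above.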

Now it might seem that the parasite could avoid this slide to a zero surplus by increasing their contribution from 5 to 6 when the first farmer announces that they no longer want to contribute to the surplus. This keeps the surplus the same, preventing any further farmers from giving up, and the parasite still comes out slightly ahead. But this strategy only works if a few farmers defect. If five farmers had defected, the parasite would have to increase their contribution to a full 10 units to prevent the surplus from shrinking any further, and this would be an overall loss for them. So whether the parasite can slack off and succeed depends heavily on how much the other farmers value the income generated by the surplus. And that is simply something the parasite can’t know with complete accuracy, so in slacking off they are taking a risk, betting that the other farmers don’t mind contributing an additional unit of work. And this is the most optimistic situation, in which the other farmers react to a decrease in the surplus by contributing less in an honest fashion. It is quite possible that those who don’t feel the surplus is worth the extra work after the parasite starts slacking off will simply contribute 9 units without announcing their intentions, which will cause the surplus to shrink even faster.
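The compensation strategy can be put in the same terms (again a sketch; the numbers follow the example above and the function name is illustrative):

```python
# To hold the surplus constant, the parasite (who started at 5 units of
# work) must add one unit of work for each honest defector, since each
# defector drops from 10 to 9 units of product.

def parasite_contribution(defectors, base=5, cap=10):
    """Work the parasite must do to keep the surplus from shrinking."""
    return min(cap, base + defectors)

for d in range(6):
    print(d, parasite_contribution(d))
# With five defectors the parasite is back to the full 10 units of work,
# so slacking off has gained them nothing.
```

The break-even point depends entirely on how many other farmers stop valuing the surplus, which is exactly what the parasite cannot know in advance.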

Of course such considerations don’t matter if the parasite doesn’t think their extra unit of work is worth the surplus that is generated initially, but if that is the case they might as well slack off honestly, announcing that they will only contribute 9 units of work and that they will forego their share of the surplus, because this leaves open the possibility that the surplus might continue to exist because of the work of the other farmers, and that they might benefit from it indirectly even if the income isn’t directly parceled out to them.

What this implies is that if the benefits that accrue from acting ethically are worth the costs of acting ethically then it is rational to act ethically, even if there might be some short-term benefits to being a parasite, because parasitism encourages others to defect from the joint agreement, which can have a run-away effect and leave the parasite themselves worse off than they started. Of course it isn’t guaranteed that this will happen, but the parasite must weigh the risks of this against the benefits of being a parasite, and compare them to the benefits that accrue from simply participating honestly in the group. A way of looking at this situation is to observe that we would prefer living in a society where people act ethically to one where people are unethical. And if we act ethically ourselves we increase our chances of living in the former, but if we act unethically we make the latter far more likely, because our behavior does not exist in a vacuum, but has the tendency to rub off on other people. But of course this is only the case when the communities of people involved are essentially stable, which prevents parasites from entering communities, getting their short-term benefits, and then moving on. And indeed, if I may offer some social commentary, this seems to happen in certain modern communities: people don’t contribute as much to the community as a whole because they view their membership as essentially temporary. But once the “surplus” is eliminated in a community it takes time for it to return, and so parasites acting in this way are effectively reducing the opportunities for other parasites acting in the same way; they are parasites among parasites. And in the long term the outcome is the same, but because they aren’t directly exposed to the effects of their choices such parasites don’t realize this, don’t realize that they are collectively destroying the very thing they plan to live off of as parasites.

October 16, 2007

How To Generalize

Filed under: Ethics — Peter @ 12:00 am

If someone wants to treat their intuitions concerning what they should and shouldn’t do as defining what is and isn’t ethical, there is nothing I can say to “prove” them wrong. At best all I can say is that I at least, and many others as well, mean something else when we use the word ethics. I can, however, demonstrate that such an approach to ethics is rather pointless. A theory about right and wrong developed in this way essentially amounts to patting oneself on the back; it serves no possible purpose. In other terms we might say that such a theory fails to resolve the “Nazi problem” or the “monster problem”, which is to say that if we happen to be monsters such a theory will not lead us to be better people. (This problem isn’t restricted to ethical theories; I would argue that many philosophical methods can’t overcome it either, since they only have the potential to endorse what we are doing, not to correct us if we are wrong.)

Some attempt to solve this problem by accepting the idea that if somehow we could all get from our intuitions, which vary from person to person, to the same ethical theory, then the monster problem would be solved. Surely, they reason, if we all arrive at the correct ethical theory then this must eliminate the problem, because the monsters arrive at that theory too. Putting aside for the moment whether such a convergence is possible, even if we could all get to the same ethical theory I am not convinced that this would mean it was the right theory. All it would demonstrate is that the intuitions we actually have aren’t too far apart. And certainly that doesn’t make them right, because we can easily imagine a monster world, where everyone has monstrous intuitions. But, unless we are to somehow justify our ethical intuitions themselves, this is essentially the only way that the monster problem can be solved, and so I will simply pretend that such a solution works for the remainder of this post, and focus on how such a convergence might be reached.

One reason that our intuitive ethical judgments might disagree is our intuitive acceptance of certain principles that extend our ethical judgments to cases where we are less certain. For example, if one person accepts the general principle that killing is always wrong and another rejects that principle, accepting that killing can be justified in some circumstances, then their ethical judgments will be in conflict some of the time. And it is not clear how such disagreements might be resolved. Such considerations motivate a retreat from general principles. Instead of general principles, perhaps what we should take as the intuitive foundation of ethics is specific cases, that is, our intuitions regarding whether specific actions in specific situations are right or wrong. Obviously there will be some disagreement about specific cases as well. Thus we must accept that our ethical intuitions about specific cases may be in error too (or deny that a convergence is possible). However, this is not quite the same problem that we faced with general principles, because our ethical intuitions about specific situations come in different strengths. And thus we might argue that, while not all of these intuitions can be trusted, for each individual there is a set of their strongest intuitions (its exact size may differ between individuals) whose members are in perfect agreement with the corresponding sets of other people. Obviously, to ensure that this agreement actually exists, the set for each person must be relatively small. And from this small set, we suppose, we should generalize, which leads to ethical rules that cover all situations, which may overturn some of our ethical judgments, and which everyone is led to construct by this process, since everyone generalizes from sets of intuitions that agree with each other.

If this could actually be done it would be the best possible outcome for doing ethics on the basis of intuition; while it can’t overcome the monster problem in all its forms, it does succeed given the assumption that we aren’t complete monsters. But, unfortunately for doing ethics based on intuition, it doesn’t work. The problem lies in how we generalize from the intuitions in this limited set to more general laws. Obviously there are many ways to generalize from a limited number of examples to more general rules; I am going to consider only two of them, but the problems facing these two ways of generalizing are shared by all methods of generalization in this context, and illustrate why this method can’t work. First consider the simplest way to generalize, by turning specific features of the situation into generalities. For example, we might generalize from the claim that it is wrong for A to kill B in room Y to the claim that it is wrong for A to kill B in any room. But generalizing in this way can’t be endorsed, because it effectively slips in unnoticed moral judgments, specifically as to which features of the situation matter. It is equally possible to generalize from that situation to the rule that it is wrong for A to affect B in any way in room Y, by turning some other feature of the situation into a generality. Since people won’t agree about which features are ethically relevant in all cases, this way of generalizing won’t lead to universal agreement. We might attempt to avert this problem by instead constructing general rules on the basis of similarity, treating all features of the situations equally, claiming specifically that the ethical status of a situation is the same as the ethical status of the situation in our limited set of basic ethical claims that it is most similar to.
This is an objective way to go about constructing general rules (everyone will construct the same general rules), but it suffers from making highly unintuitive claims. If our basic ethical intuitions are that it is wrong for A to kill B in room Y and that it is right for C to give a gift to D in room Z, then we will judge C killing D in room Z to be right, because it is much more similar to the second situation than the first. Of course we could avoid this problem by making our basic intuitions include fewer irrelevant facts, but then we are faced with the problem we went down this path to avoid: not everyone agrees as to which features of the situation are unimportant, and so there will be no general agreement.

And that is the end of constructing ethical theories on the basis of intuitions, unless we are willing to give up on solving the monster problem in any way. Obviously part of the problem with constructing ethical theories on the basis of intuition is that intuitions are highly subjective and variable. But another problem plaguing this approach is that it is essentially a descriptive approach to ethics: it aims at creating a theory that labels situations as right or wrong. This contrasts with a normative approach, which focuses on understanding normativity (what we should and shouldn’t do), and then, on the basis of that understanding, labels situations as right or wrong. Studying normativity seems to be a better approach because we need only decide the facts about normativity, a single area of inquiry, rather than about a myriad of different situations, where it isn’t clear where a unified theory will come from or how we should get to it from the essentially unrelated situations.

September 21, 2007

The Collective Ethical Problem

Filed under: Ethics — Peter @ 12:00 am

Groups of people, organized around a common goal, have the tendency to act unethically (in proportion to their size) in pursuit of that goal, even if all the people that compose them are individually ethical. This apparent paradox (where does the unethical behavior come from?) explains why genuinely bad people aren’t that common, and yet organizations that seem best described as morally deficient (such as corporations that are seemingly willing to do anything for money) are common. What can we do to correct this tendency?

But before I discuss a possible solution let me shed some more light on why groups act unethically. I see the problem as stemming from two sources. One is a misjudgment by members of the group about how the other members will act. Members of the group obviously see the purpose of the group as important, or are at least committed to acting as if it was (otherwise they wouldn’t be members). Thus they are willing to make small sacrifices to achieve that goal, to act in ways that are slightly unethical, because they reason that the goal of the group justifies such small transgressions. But of course everyone reasons that way, and so, as members of the group, many people act unethically on behalf of the group, which has the potential to add up. But this is only part of the problem. The majority of the unethical actions that groups take result from the fact that the common goal shared by their members does not include acting ethically. Thus, as a whole, the group will focus most of its resources on pursuing that goal and few on anything else. And pursuing a goal to the exclusion of all else usually results in unethical behavior. Another way to see this phenomenon is to observe that the members of the group will have different opinions as to how the group should act ethically, what is right and wrong for the group, and what tradeoffs can be made in pursuit of the overall goal. Thus attempts to improve the ethical status of the group will be sporadic and unfocused, and as a consequence won’t achieve much. In comparison, everyone has a pretty clear idea about the stated purpose of the group and how to achieve it, and so the group tends to achieve its stated purpose at the expense of ethics.

Since the unethical nature of groups is an emergent phenomenon, we can’t expect that telling people about it will correct the problem. Nor is it likely to self-correct: acting ethically is not necessarily to the advantage of the group (although a world in which all groups acted ethically would be to their advantage), and so spontaneous fixes to this problem are unlikely to emerge and succeed. (Of course individual people were once in the same situation, but, unlike groups, we have our own intelligence, instead of emergent behavior, and thus could decide on a rational basis to act ethically and create systems to encourage such behavior.) Obviously then the solution, whatever it is, must be imposed on groups by a central authority. In fact this is exactly the kind of situation in which governments are most effective.

Given that the unethical behavior of groups arises from the fact that they form around goals that don’t include ethics (just as a person whose sole goal in life was to make money would probably act unethically), one way to fix the problem might be to force groups to add ethics to their explicit goals. Corporations could adopt an ethical charter that spells out exactly what limits there are on the ways in which they will try to make money. Obviously such a charter will not stop corporations from acting unethically, just as making people sign a statement that they will abide by the laws won’t make them obey the laws. But the point of such a charter is not to end wrongdoing; the point is to stop wrongdoing that arises unintentionally. The charter focuses members of the group on acting ethically as well as on the primary goal of the group, and details exactly what ethical action entails, at least for a member of that group. Emergent unethical behavior, as I stated above, can come from people willing to act unethically in small ways to achieve the goal of the group. Hopefully directions from above saying not to do that will at least diminish such behavior. More importantly, such a charter will unify the members of the group interested in acting ethically; instead of acting haphazardly to make the group act ethically, their efforts will be concentrated by the charter, ideally resulting in a second emergent force that counterbalances, to some degree, the group’s natural tendency to act unethically.

To finish let me say a few words on how such an idea could be implemented in practical terms. Obviously it isn’t possible to force all corporations to adopt such a charter; if they catch wind of such a proposal they will collectively fight it, creating opposition that would be impossible to overcome. Thus a more successful approach would be to make adopting such a charter optional, and to allow some leeway as to its exact nature. A panel could be formed to create standards that any ethical charter must meet, allowing individual corporations to create a charter they are at least somewhat willing to adopt. To encourage corporations to adopt charters the government could offer them a tax break for doing so, and might allow them to label their products with a special seal (thus giving them some competitive advantage). And this would make corporations push for the proposal rather than oppose it. Finally, inspectors would have to be hired to make sure that corporations were doing more than adopting the charter in name only, which would include enforcing it with the same degree of effort the corporations spend trying to catch people who embezzle or cost them money in other ways. Individuals might also be allowed to bring suit against companies that violate their ethical charter.

And that’s my proposal. While it might not be perfect it is better than what we have in place now for curbing the unethical tendencies of groups: nothing.

September 19, 2007

Drugs In The Water

Filed under: Ethics — Peter @ 12:00 am

Should we put drugs in the water? I think there is something obviously wrong with the idea, as evidenced by the fact that no one actually puts drugs in the water. There is some bit of common sense that says, “no, that is a bad idea.” However this common sense may be hard to defend, depending on what kind of ethical theory you subscribe to. Don’t we want to make other people happy? If so then what is wrong with a nice treat for everyone?

Of course drugs are bad, but no one is forcing us to put dangerous drugs in the water, or even high levels of them. I’m sure that if we really wanted to we could find a drug with few negative effects and dilute it so that there is a noticeable but safe effect (and, additionally, one which people don’t become acclimatized to). If happiness is good for people then how can we object to this? Now some might divide happiness into different kinds, and claim that some forms of happiness are better than others. That claim is, obviously, rather absurd (since it is hard to justify given that they all feel the same, and the fact that happiness feels desirable is the very thing that the idea that it is good is based on), but even if it were true it wouldn’t affect our calculations. Sure, if we could give people some superior brand of happiness they might be even better off, but we certainly aren’t doing them a disservice by giving them an inferior kind of happiness. And thus we would still have a reason to put drugs in the water.

However, our goal when acting ethically is usually thought of as achieving what is good for people. The only reason that making people happy seemed like an ethically worthy task was that happiness intuitively seems good for people. I would claim the contrary though: in many situations happiness is bad for people. Which is not to say that happiness is intrinsically bad; as I see it, happiness can be good or bad depending on the situation. Happiness is a reinforcement mechanism: when we experience happiness it leads us to keep acting in the ways that brought that happiness about. And consciously this manifests as a desire to be happy. But just because we are built to seek happiness doesn’t mean that obtaining it is good for us; we function best when we are happy only appropriately, when happiness reinforces productive behaviors. Inappropriate or undeserved happiness can reward undesirable behaviors, and is thus to be avoided. Of course by undesirable behaviors I mean undesirable from our current point of view; from our current point of view we have various goals, or, in other words, there is a certain kind of future person we would like to be. Happiness is to be desired, then, only when it reinforces behaviors that lead us to become that future person; otherwise happiness may lead us astray, into becoming someone we didn’t want to be. An example of this effect is how video games and TV can kill the productivity of people who enjoy them. If you enjoy them then watching TV or playing games can become reinforced to the exclusion of other activities, because it is easier to achieve happiness through them. But few people’s vision of their future selves includes watching TV or playing games, and so reinforcing these activities may stunt activities that better suit their goals.

So, to return to the idea of putting drugs in the water: making people always slightly happy would diminish the reinforcing power of happiness where it is appropriate (because the difference between the happiness that comes about as a result of certain behaviors and the person’s normal state of mind is diminished). And, at the same time, a constant low level of happiness would reinforce simply doing nothing (if you could thoroughly enjoy yourself just staring at the corner, well, why wouldn’t you?). But neither is adding depressants to the water beneficial. Making people less happy also has the tendency to result in them doing nothing, because if it is hard to feel happy then no behaviors are reinforced.

And now we can return to the question of what is best for people, as it certainly isn’t happiness. What it is will probably depend on the person’s individual goals, that is, the nature of the person they want to become. And thus the best thing that we can do for someone else is to assist them. Of course I don’t mean that we should accomplish their goals for them; the whole point of a person’s goals is that they want to accomplish them. Assistance, then, means helping someone accomplish their goals. It’s possible a hands-on approach could work, but we can be equally helpful simply by facilitating them, providing them with the power to reach their goals and eliminating obstacles that would otherwise prevent that. Perhaps that assistance will end up making them happy, but it might very well involve making them unhappy as well, for example if we force them away from a distraction.

Of course this entire discussion is predicated on the idea that acting ethically is helping people have what is good for them. This is not actually a position I would take; I would argue instead that what is ethical is helping society to obtain what is good for it. And often, almost always, helping the people who compose a society is good for society. But not in absolutely every case, because what is good for some people, what they want at the most fundamental level, may be bad for society. And then we must choose whether to do what is good for society or what is good for them, with the correct choice being society as a whole.

September 10, 2007

Redressing Past Crimes

Filed under: Ethics — Peter @ 12:00 am

If one individual wrongs another we generally agree that if possible that wrong should be righted, i.e. that the harm done to the individual should be undone as much as possible, and that the guilty party should be deprived of any benefits that followed from that crime. For example, if I steal your TV, and later that fact is discovered, then it is clear that I should be forced to give your TV back to you, or buy you a new TV of equal value if that isn’t possible. And generally the longer the wrong goes unaddressed the more we need to do to repair that wrong, or so it would seem. If my theft of your TV has prevented you from watching your favorite shows for a week then not only do I need to give you your TV back, but I need to somehow compensate you for the fact that you missed your shows as well. Of course deciding what is fair is often hard, and sometimes it isn’t possible to put things right, but it is an ideal we strive for.

Things become more complicated when third parties are involved. Obviously, in the case of past crimes, I am interested in third parties who happen to benefit, by inheritance, from wrongdoings. I claim that such inheritors do not need to give up any advantages acquired by inheritance in this way. My reasoning on this issue primarily follows from the fact that it serves no purpose to punish people (by taking away whatever advantages they have gotten in this way) who haven’t actually done anything wrong themselves. Of course those who think such gains should be taken away might reason that allowing people to come into advantages in this way is unfair. If our parents both had $10 and my parent stole $5 from your parent, so that I was born with $15 and you with $5, then it seems unfair to you that I get more to start with. But the same inequality could exist if my parent had simply worked three times as hard as your parent. If one situation is unfair then so is the other, because in neither case did you or I do anything to merit the inequality. I am actually sympathetic to the claim that this is an inequality that should be addressed, but I think the inequality lies in the ability to come into advantages based on birth alone, not in where those advantages come from. As long as we allow people to be unequal at birth, it makes no difference how those inequalities came about.

And we can extend this reasoning to cover third parties benefiting from wrongdoing in general (given that the third parties are actually third parties, and not accomplices). Suppose then that I steal your TV and sell it to someone else, or possibly give it away, and that the people who receive the TV have no idea that it is stolen. I would maintain that we can’t force these people to give it back to you. What we can do is punish the criminal, forcing them to compensate you with something of equal value even if it isn’t the TV itself. It is then up to you to get your TV back from the person it ended up with. The reasoning behind this is that, like inheritance, whether someone gives you a TV, or sells it to you at a cheap price, is a matter of luck. These third parties haven’t done anything wrong, and so by punishing them you are effectively punishing them for having good fortune. And that doesn’t really make sense. Punishment exists to discourage bad behavior. Do we really want to discourage people from buying things or from accepting gifts?

Using this kind of reasoning I would argue that modern US citizens aren’t obligated to give back the land their ancestors stole from the Native Americans. The people alive today are neither those who did the stealing nor those who were stolen from. And thus trying to “give back” the land would be another kind of theft. Of course there are some who would disagree, reasoning that the parties involved in the initial wrongdoing were not people but societies, that the white/protestant/Andrew Jackson society stole from the Cherokee/Creek/etc society, and that since these societies still exist it makes sense to return the ill-gotten gains of the first to the second. However, I think the analogy between societies and people when it comes to questions of ethics is an illegitimate one, a kind of category mistake. In this specific case the analogy breaks down over punishment; it doesn’t make sense to punish a society. The purpose of punishment is to discourage certain behavior, but societies cannot systematically be punished (although governments, or more specifically the people who compose them, might be, to the extent that the people living in them use that knowledge to guide their political decisions). If the US returned land to the Native Americans it would be a voluntary decision on its part, and would not serve to set an example for other nations. And of course another difference between the two scenarios is that Native Americans are free to become US citizens and thus to enjoy the stolen land as much as anyone else.

Of course this doesn’t mean that the Native Americans might not have complaints about their current situation, or want their land back. And an important part of justice (note: not ethics and punishment, which is what we were concerned with previously) is reaching a state of affairs that all parties can bring themselves to accept. For example, some Native Americans might feel that the land in which they can govern themselves as independent nations isn’t suitable (remember, the Native Americans were often resettled in the least desirable places). And if both sides agree that it isn’t fair for them to be stuck in an area they don’t want to live in the government might buy land in other areas from the individuals who currently own it, and convert it into a reservation. Or, alternately, some Native Americans might want to secede from the union completely, which might also be agreeable to both parties. In any case, such matters are generally not settled by appeal to overarching principles, but by reaching compromises based on the desires of the parties involved, as are many matters of justice.


Blog at WordPress.com.