On Philosophy

July 31, 2006

The Turing Test As Evidence For Consciousness

Filed under: Epistemology,Mind — Peter @ 12:37 am

Yesterday I discussed, briefly, how we judge the effectiveness of evidence towards proving or disproving a hypothesis. Let us now apply those standards to a specific piece of evidence, the Turing test. Many people think that passing the Turing test would be good evidence that a program is conscious. Instead of arguing the merits of behavior as a measure of consciousness, which is how the Turing test is usually thought of, we can use our knowledge of probability to get a more precise idea of how reliable it is.

Before we get started, however, there are a few preliminaries that I need to get out of the way. For starters let me note that an effective test for consciousness needs to perform two functions well. First, given a program that passes the test we must be able to have a high degree of confidence that it is conscious. Second, and of equal importance, given a program that fails the test we must have a high degree of confidence that the program is not conscious. If our test doesn’t meet both requirements, its standards for consciousness are either too high (many conscious programs fail the test) or too low (many programs that are not conscious pass the test). The reason I mention this is that, in general, a good test for anything divides the possible candidates into two groups, those with the quality and those without, which is exactly what performing the two functions above well would guarantee. A second point I need to mention is that I hold consciousness to be largely independent of intelligence (see here), a fact I will make use of extensively below.

Let us then first examine how likely a program is to be conscious given that it passes the test, Pr( C | P-T ). From the definition of conditional probability we know that Pr( C | P-T ) is equal to Pr( P-T & C ) / Pr( P-T ). These numbers are awfully small, too small to intuit the resulting probability directly, but there is a way to guess roughly what it must be. Specifically, the closer the number of programs that both can pass the test and are conscious comes to the number of programs that can pass the test, the closer Pr( C | P-T ) will be to one (because as the number of programs that are both conscious and able to pass the test approaches the number that can pass the test, Pr( P-T & C ) approaches Pr( P-T )). I would argue that the number of possible programs that can pass the test but aren’t conscious is fairly high (for example, programs designed with sufficient information about human psychology that they can produce the sentences most likely to deceive us, etc.), leading me to conclude that Pr( C | P-T ) is significantly less than one, probably between .6 and .9, which means the Turing test is decent as evidence that a program is conscious, but far from perfect.

The more interesting case is when we examine how well failing the Turing test disproves the hypothesis that a program is conscious ( Pr( ~C | ~P-T) ). Just working from the assumption that many conscious programs couldn’t pass the test (specifically the ones with low intelligence or ones that are unusually honest) we can see that this probability will be rather low. This means that even if a program fails the Turing test there is little additional reason to believe that it lacks consciousness, and thus that we should rely on other methods to determine if it is or isn’t conscious.
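To make the arithmetic concrete, here is a minimal sketch in Python. The counts are entirely made up, chosen only to mirror the assumptions above (a sizable population of non-conscious deceivers among the passers, and many conscious but unintelligent or unusually honest programs among the failers); the post itself gives no actual numbers.

```python
# Hypothetical counts over the space of possible programs (made-up numbers
# chosen to mirror the post's assumptions, not measured data).
conscious_pass = 70       # conscious programs that can pass the test
deceiver_pass = 30        # non-conscious programs built to deceive us
conscious_fail = 60       # conscious but unintelligent or unusually honest
non_conscious_fail = 40   # non-conscious programs that also fail

passers = conscious_pass + deceiver_pass
failers = conscious_fail + non_conscious_fail

# Pr( C | P-T ) = Pr( P-T & C ) / Pr( P-T ): passing as evidence for consciousness
print(conscious_pass / passers)       # 0.7, inside the .6-.9 range suggested above

# Pr( ~C | ~P-T ): failing as evidence against consciousness
print(non_conscious_fail / failers)   # 0.4, rather low, as argued above
```

With counts like these the test does a passable job in the first direction and a poor one in the second, which is exactly the asymmetry the conclusion below turns on.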

So what do these findings tell us about the Turing test? Well given that it passes only half the requirements (sort of) I would say that it isn’t a very good test of consciousness. It is, however, a good test of human-like intelligence, I will give it that. This of course leaves us in need of a better test of consciousness, one in which no conscious system would be marked as non-conscious, and where no non-conscious system would be mistakenly thought to be conscious. Feel free to leave a suggestion.


July 30, 2006

Falsificationism and Verificationism

Filed under: Epistemology — Peter @ 1:24 am

Verificationism is usually associated with the philosophy of language. However, we can also see it as a way to conduct science (since science aims at the truth, and verification is supposed to be evidence that a statement is true). The problem with verificationism (or so it is commonly thought) is that some statements are “universal” in the sense that they make claims about a possibly infinite set of objects. Since it isn’t possible to verify that the statement is true for each of an infinite number of objects, it seems that verification is impossible. Because of problems such as this, falsificationism was proposed as a way to conduct scientific investigations. A scientist working under falsificationism attempts to find cases where the “universal” claim is false, and if no such counterexamples can be found the hypothesis is accepted as provisionally true. Thus under falsificationism we should test the consequences of a theory that we consider most unlikely (for example, cases where it contradicts established theories) in order to make the best attempt to falsify it.

However, perhaps we shouldn’t give up verificationism so easily. If we look at verificationism from the perspective of probability theory it recommends the exact same kind of investigations as falsificationism, showing that it might be a decent way of approaching a scientific investigation after all. We formalize verificationism with probability theory by treating every piece of evidence as confirming the hypothesis to a greater or lesser degree (note that the degree of confirmation is always positive, but a degree of confirmation less than one implies that the evidence is disconfirming the hypothesis). We figure out exactly how much more we should trust our hypothesis, given the evidence we have collected, by applying Bayes’ theorem. I won’t give the full mathematical run-down, but suffice it to say that in order to achieve the greatest degree of confirmation for a theory it is necessary to look for evidence that the hypothesis predicts is likely but that is generally considered unlikely, and as mentioned above this is the exact same recommendation that falsificationism gave us.
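Here is a minimal sketch of that run-down in Python, reading the “degree of confirmation” as the factor Pr( E | H ) / Pr( E ) by which Bayes’ theorem rescales the probability of the hypothesis; the numbers are my own illustration, not from the post.

```python
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' theorem: Pr(H | E) = Pr(E | H) * Pr(H) / Pr(E),
    with Pr(E) expanded by the law of total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

prior = 0.5

# Evidence the hypothesis predicts (0.9) but that is likely anyway (0.8):
# the confirmation factor Pr(E|H)/Pr(E) = 0.9/0.85 is barely above one.
print(update(prior, 0.9, 0.8))  # ~0.53, almost no confirmation

# Evidence the hypothesis predicts (0.9) but that is otherwise unlikely (0.1):
# the factor is 0.9/0.5 = 1.8, a much stronger confirmation.
print(update(prior, 0.9, 0.1))  # 0.9, strong confirmation
```

The surprising prediction is precisely the one that would most sharply disconfirm the hypothesis if it failed to come true, which is why both methodologies point at the same experiments.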

So if both falsificationism and verificationism endorse the same kind of investigation, which one should we prefer? I personally think that verification, understood as the search for evidence with a degree of confirmation greater than one, is a superior way of looking at the scientific process. Besides encouraging us to look for evidence that is likely to falsify the theory (which proponents of falsificationism have shown to be a good idea on numerous grounds) it also provides a way to show several other intuitive ideas about science to be true. For example, verificationism as interpreted here shows that you can never prove a hypothesis to be true beyond the possibility of doubt (the probability of the hypothesis is always less than one no matter how much evidence you have), and that you cannot disprove a hypothesis unless it predicts that some events are impossible. Finally, it also provides a way to judge which of two hypotheses is superior if both of them have only supporting evidence and no counterexamples (you determine which hypothesis is confirmed to a greater degree). Does this really matter to scientists? No, but it does impact how we think about epistemology.
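Both of those consequences can be seen with the same update rule; again this is my illustration rather than anything in the post.

```python
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """One Bayesian update: Pr(H | E) = Pr(E | H) * Pr(H) / Pr(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# Never proven beyond doubt: ten observations the hypothesis makes likely
# (0.9) but that would otherwise be coin flips (0.5) push its probability
# ever closer to one without ever reaching it.
p = 0.5
for _ in range(10):
    p = update(p, 0.9, 0.5)
print(p)  # ~0.997, and still below 1 after any finite number of updates

# Disproof requires an impossible prediction: observing an event the
# hypothesis says cannot happen (Pr(E | H) = 0) drives its probability to 0.
print(update(p, 0.0, 0.5))  # 0.0
```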

July 29, 2006

What Do You Believe?

Filed under: General Philosophy — Peter @ 12:21 am

Beliefs are a hard thing to pin down. It is easy to say one thing while believing another. It is almost as easy to say that you believe something, and honestly think that you believe it, and yet have your actions prove otherwise. Psychologists discovered long ago that the best way to see what someone actually believes, and how strongly they believe it, is to see what they will risk on the belief being true, and what opportunities they forsake for it. It is this kind of research that has revealed that if people tell themselves something often enough they will come to think that they believe it, even when their actions betray that they really believe something else. I think many people’s religious “beliefs” fall into this category; there are strong social pressures from childhood onwards to profess belief in one religion or another, but few people act as though they really believe the words that they are saying.

Specifically I am referring to the possibility of eternal judgment and punishment (or reward) based on a person’s behavior. If a person really believed this, shouldn’t we expect them to do the right thing almost perfectly? After all, the possible consequences seem much more significant than any possible immediate reward from bad behavior. Of course there are several possible explanations for this seeming discrepancy between professed beliefs and actions.

1: Intention
One possibility is that people who hold these beliefs also believe that the intentions one has are the only relevant consideration in the final judgment, and thus as long as they keep their minds pure they don’t have to worry about their actions. I guess this is a possible belief system, but I think that few actual religions would endorse it (remember the saying: “the road to hell is paved with good intentions”). It is generally accepted that a person can be good only if they perform good acts and have the appropriate intentions, and thus I will discard this possibility as unlikely for most people of most religions.

2: Forgiveness
Another much more likely possibility is that these people believe that they also can be forgiven for their bad actions, and thus escape punishment. However, when coupled with the fact that these people make the wrong choice on a regular basis, forgiveness doesn’t seem a likely possibility. Here is why: generally it is accepted that one can be forgiven for wrongs only when the wrongdoer has the honest intention to do better in the future. However if a person makes the wrong choices at the same rate after asking for forgiveness then clearly they didn’t actually intend to do better (possibly because they do believe that they will be forgiven whenever necessary). Thus a person who believes that forgiveness can be a safety net for their mistakes and then feels free to act badly won’t be forgiven, because they aren’t truly remorseful. Forgiveness makes sense for the good person who makes the occasional mistake, but not for the majority of the religious, whose beliefs I am questioning. (If you do believe that people don’t need a real intention to do better in the future in order to be forgiven this makes forgiveness essentially a license to do whatever you want, and thus any ethical standards become largely irrelevant, a strange position for a religion.)

3: Temptation
A third possibility is that people believe they can act poorly, and not be punished for it, because they can then blame their actions on unavoidable temptation by their desires, which in turn removes (or greatly reduces) their moral responsibility. This is much like the insanity defense, which, in the ethical realm, says that when the causes of a person’s behavior are not mental aspects that can rightly be called part of a rational agent (neuroses), the person isn’t morally responsible. Once again this may be a logically consistent defense, but it doesn’t seem likely that it is what the religious believe, since if they did they would feel free to sin all they wanted, without guilt, and then simply blame their actions on their uncontrollable desires.

4: Self-Deception
The final possibility then is, as I mentioned initially, that people who profess to hold religious beliefs but act in morally questionable ways aren’t actually believers. How much morally questionable behavior should indicate that real belief is lacking? I would say any regular bad behavior, or any behavior that results from “giving in to temptation”, because clearly a person who truly believes in possible punishment after death would do everything they could to avoid it, and no amount of temporal benefit could convince them to risk such a horrible fate. If you feel that you are religious I ask you to honestly examine your actions. Even though you are strongly convinced that you do believe, that is not enough to guarantee that it is really so; after all, there have been murderers and other obviously immoral people who have likewise professed to believe, and in all likelihood felt the same way about their beliefs. The conclusion you should reach is not that you should necessarily give up trying to believe, simply that if you want to claim that you are a believer your actions should be in accordance with that statement, or you should give up the pretense.

I know that some people are going to take this as a personal attack, and I’m sorry if you feel that way, but I’m turning off comments on this one for a few days so that no one is tempted into saying something rash.

July 28, 2006

More On Community Ethics

Filed under: Ethics,Political Philosophy — Peter @ 12:40 am

This post expands on the ideas presented in these posts: 1, 2, 3, 4, but you don’t need to read them first. All you need to know are the two principles that are the foundation of this ethical system: it is ethical to act in the best interests of the community and unethical to act against those best interests (other actions are morally indifferent). Additionally, it is unethical to remove someone from the community against their will, which prevents ethics from mandating that we treat some people badly in order to benefit the rest of the community, since mistreating them would be equivalent to removing them from the community.

Today I will answer the questions “what kind of social structures should we aim for if we want to be maximally ethical?” and “how should we treat people not in our community?”

1: The Problem of Politics

Most ethical codes judge social systems to be more or less ethical depending on the kind of rules they create. For example, almost all ethical systems would judge tyranny a bad way of structuring society, since tyrants will generally make rules that promote only their own wellbeing, and are likely to be disastrous for the people living under them. So what kind of political system would the ethics presented above favor? Obviously one that encouraged compromise, but unfortunately I have no idea how to create a system that mandates compromise, or is able to determine when a compromise is fair. At this point you may think I am going to claim that democracy is the most ethical system, but I’m not. Democracy violates both ethical principles given above. Democracy has the power to effectively cut people with minority views out of the process of government, which goes against the second principle. Additionally, the majority does not necessarily know what is best for the community as a whole, and even if they know it they may not choose it. Democracy is suited to a system of utilitarian ethics, where we strive only to increase the total happiness, and little else. What we want is a way to guarantee that the best decisions for the community are being made. The best decisions are generally made by a single well-informed, properly motivated person (since an expert can almost always make better judgments than the majority). Thus I propose a conditional monarchy. By this I mean that one man or woman is given unlimited power (except where constrained by certain universal human rights), but they can keep the office (and the power) only if they maintain some kind of community wellness index above a certain level. Designing the wellness index itself is a task best left to social scientists, of course. Selecting a new ruler when an old one fails can be done by any number of methods. If you think that such a system isn’t ethical I ask you to seriously consider why you feel that way. Do you believe that ethics mandates that we give everyone power? Generally I think that power should be given on the basis of ability, and not handed out to those who may use it poorly (such as the majority).

2: The Problem of Strangers

A potential problem with the ethical system we have been considering is how it mandates that we treat strangers. Since a stranger isn’t a member of our community, do we have ethical license to treat them as poorly as we wish? I would argue that we don’t, not because the stranger is currently a member of the community, but because they are potentially a future member of it. In essence this is the reverse of the principle that we shouldn’t remove anyone from the community: a principle that we should expand the community by adding new members whenever possible. Adding members to the community is beneficial because their wellbeing is added to the wellbeing of the rest of the community, and thus everyone ends up better off (generally, of course; see the post on punishment). An exception to this is when resources are scarce and the community can’t afford to support any more members, but in that case does it really seem unethical to chase off strangers? Another exception to the derived principle that we should add new members to the community is the case of having children, because initially newborns don’t add anything to the wellbeing of the community. As a whole the community needs enough members to survive, but not so many that its resources are depleted, and thus ethics can’t mandate that no one have children, or that everyone have six or seven, for obvious reasons. However, since people have a varying number of children, a couple deciding on their own to have many or no children isn’t a problem. And thus it seems to me that the benefits to the community in most cases of potentially adding a new child are pretty much a wash, leaving it up to individual couples whether they want to have children and how many they want to have.

July 27, 2006

On Knowledge

Filed under: Epistemology — Peter @ 12:01 am

The ability to recognize knowledge and the ability to acquire knowledge can be seen as the foundation of all our beliefs about the external world. Typically knowledge is defined as justified, true belief. It has been argued before that this definition is imperfect; here I will give further reasons to believe that we need a new definition of knowledge, as well as propose a possible replacement.

Let’s start with the requirement that knowledge be true. This seems reasonable at first, but, considering what we actually demand of the statements that we call knowledge, it defeats the point of labeling a thought knowledge. This is because knowledge is supposed to be how we discover truths about the world. However, if part of knowledge is a requirement that it be true we could never be sure that a statement was knowledge, because to know that it is true we must have knowledge about it, but to have knowledge about it we must know that it is true, and so on. For example, consider two scientists examining a ball in a box. One scientist takes the ball out of the box, observes its color and then places it back, and the other scientist does the same. Now suppose those scientists disagree about the color. It is true that given our third-party knowledge of the situation we are able to say which scientist has knowledge and which doesn’t (given that it is our thought experiment and we really know the color of the ball). This doesn’t help the scientists themselves, however. Their beliefs about the color of the ball are equally justified. They could perform experiments that justify one color belief more than the other, but there is nothing they can do to guarantee that it is the truth, and thus they can never be sure that they have knowledge about the color of the ball. This seems to defeat the point of having knowledge, because knowledge is supposed to help us get at truths in the external world, but it seems like we can never be sure that we have it, and hence must forever be doubtful of the external world.

Likewise the requirement that knowledge be true would lead us to conclude that most scientific theories aren’t knowledge. Certainly past theories weren’t knowledge, since modern scientists have shown them to be false. However, given this track record of improving theories it is likely that the current theories will be discarded as well, and hence aren’t the truth (although they may be good approximations). If we accept the standard definition of knowledge it seems we should conclude that scientists never have knowledge about the things they study, and since this is the exact opposite of what we should conclude it is another good reason to discard the standard definition.

Another aspect of the standard definition of knowledge that we might find fault with is the requirement that it be a belief. This is perhaps due to the shifting nature of what exactly we consider a belief, but let us go with the following definition: a belief is a disposition to act in a certain way. For example, if you believe that the moon is made of cheese you will tell other people that the moon is made of cheese, and you won’t hesitate to take a bite of it given the chance. Reflection upon the nature of belief has led many people to think that beliefs are best demonstrated through the choices a person makes, and not what they say. For example, if you claim that you aren’t scared of mice (thus telling us that you believe mice aren’t scary), but then scream and run away when you see one, we are justified in thinking that you really do believe mice to be scary. Consider then the following situation: an engineer has designed a new safety device for a machine, and he or she has proved without a doubt, from well-tested principles, that it is impossible for it to cause an injury. We then ask the engineer to test that safety device, say by sticking their hand into the machine. I think it is quite reasonable for the engineer to refuse. This, however, demonstrates that he or she doesn’t believe that the safety of the device is absolutely certain. Yet the principles that its functioning is based on are so well proven that we conclude that they do have the knowledge that it will work, simply not the belief that it will. Because of situations like this I think the requirement that knowledge must be a belief should be dropped as well.

What does that leave us with? Well, from the above criticisms I think we should conclude that knowledge is simply a justified statement, and even though we have done away with two-thirds of the standard definition it is still reasonable to consider knowledge as leading us to the truth about the external world. First, though, it is important to clear up exactly what we mean by justified. There are three basic ways a statement can be justified. One is that it can be shown to be a tautology; although this is the most important case for mathematical knowledge I will ignore it for the remainder of this post, since tautologies tell us nothing new about the external world. Another possibility is that the statement can be supported by evidence, in a Bayesian fashion. I call this original justification, and a statement supported by original justification strong-knowledge. Finally, a statement may be justified by showing that it is a consequence of other statements that are considered to be knowledge. I call this dependent justification, and statements supported in this way weak-knowledge.

As the above categories show, it is hard to define knowledge as something that we either have or don’t have. Instead we become more or less certain of a statement by making observations (by having stronger or weaker justification), but we can never be completely sure that it is true or false. This means that we can never completely remove the possibility of error, but, as the example of the scientists attempting to determine the color of a ball showed, we shouldn’t expect that absolutely certain knowledge is possible anyway. The possibility of error does not reduce the usefulness of knowledge, or its validity as a reason for action, nor should it give us reason to doubt the possibility of knowledge or the existence of an objective world.

Secondly, the strong-knowledge / weak-knowledge distinction is important when considering to what extent we should rely on a statement, and how we should talk about knowledge in general. Sometimes when people refer to knowledge they really mean strong-knowledge (for example when we say things such as: “you can’t have knowledge about the future”). Strong-knowledge, in general, is more useful and more reliable than weak-knowledge. For example, you may think that a specific weak-knowledge claim is highly probable because the strong-knowledge claims it is based on are confirmed to a high degree. What this doesn’t take into account, though, is that there may be other factors at work that make your claim false. For example, we can have strong-knowledge of the claim “only men have beards”, but we can’t deduce the claim “women can’t have beards” with the same degree of certainty, because we have omitted another factor, namely that women who grow beards remove them due to social pressures. This also explains why the engineer mentioned above may have been justifiably hesitant to try their proven safety measure; they had only weak-knowledge concerning it, not strong-knowledge. It is too easy to make mistakes when deducing weak-knowledge from strong-knowledge, which is why I suggest that when we think about investigations into the truth we concern ourselves primarily with strong-knowledge (even though we use weak-knowledge on a daily basis to get by practically in the world).

You may be inclined to point out now that many strong-knowledge beliefs rely on other strong-knowledge beliefs in order to connect their evidence to their theory. For example, if you measure the light from the sun in order to collect evidence for a hypothesis about its temperature, your conclusions depend on strong-knowledge about the connection between radiated energy and temperature. I would say that this does indeed make the boundary between strong and weak knowledge a bit gray, as it should be. Generally we create standards of certainty (very high standards) which, if a statement meets them, allow it to be used as part of the procedure by which we determine whether evidence confirms or denies another statement. However, no matter how weak you make these standards there are some statements that can never be moved from the domain of weak-knowledge to that of strong-knowledge, for example claims about a future event, since one can only collect evidence after the event has already happened.

Am I absolutely sure that this definition of knowledge is satisfactory for all cases? No, but I have confirmed it to a sufficient degree to consider it knowledge, until something superior is proposed.
