On Philosophy

March 25, 2008

Paper: The Pragmatism of Simple Agents

Filed under: Epistemology,Papers — Peter @ 12:00 am

Peirce’s project, at least as it appears in The Fixation of Belief, is to demonstrate the psychological plausibility and inevitability of the scientific method. By appeal to a conception of doubt as a mental irritant, and to our natural tendency to compare our beliefs to those of other people, Peirce argues that the scientific method is, ultimately, the only way to rid ourselves of doubt that is consistent with those dispositions. He then buttresses this picture with an appeal to a specific conception of reality, and, in How to Make Our Ideas Clear, of truth as well. This picture is, on one level, very satisfying, because it explains why we are psychologically certain of the scientific method, even if it isn’t epistemologically immune to skeptical scenarios, and even if it falls short of Cartesian standards for belief. But Peirce’s account is not completely free of problems; we might complain, for example, that he has left no room for objectivity, or that his conception of reality seems both unnecessary and overly metaphysical. More significantly, we might worry that his account lacks a much-needed defense of the scientific method in more than psychological terms. Defending the psychological acceptability of the scientific method is certainly important, but just because we as a species have a tendency towards the scientific method, or because it satisfies our inquisitive urges, doesn’t mean that the method is any good, objectively speaking. Addressing these problems requires a perspective from which we can make observations about the scientific method in action, both in order to justify it and to speak about it in terms that are distinct from the sometimes confusing details of human psychology. To that end I propose considering very simple agents and observing what methods are productive for them.
Because of their simplicity it will be easy to understand the structure of their inquiries, and easier to argue that one particular strategy for inquiry is better than another. From this simple account, I claim, a clearer picture will emerge of how our more complicated scientific method works and why it is successful for us.

1: Peirce’s Project

Peirce’s account of the scientific method is based on a theory about the human psychology of doubt. He claims that with respect to particular opinions we can be either in a state of doubt or of belief, and that we seek to be in a state of belief and to avoid doubt. But of course we can’t just decide whether to be in a state of doubt or belief, and so Peirce further supposes that what puts us into a state of doubt or belief is, at least in part, dependent on the standards for belief and doubt that we have come to accept. We might think of such standards as a measure of “fit” between a particular opinion and the rest of our beliefs and observations, with what matters in terms of “fit” being a combination of our innate psychology and learned dispositions. Admittedly, with respect to the claim that what makes us believe or doubt may differ from person to person, this may not, at first glance, seem to be Peirce’s position. After all, he criticizes the Cartesians, who claim that we must doubt our beliefs if certain skeptical scenarios can be constructed. Peirce says that such doubt is imagined, not real, and thus is not a proper motivation for inquiry and the revision of our beliefs. And he will appeal to this fact in defense of the scientific method, arguing that it does not have to demonstrate its conclusions, only ensure that there are no reasonable grounds for questioning them. Such claims might seem to imply that grounds for belief and doubt are a fixed part of our psychology. But that would contradict what Peirce later says concerning alternative methods for belief fixation, namely that the people who accept those methods are actually guided in what they believe and doubt by them, such that someone who accepted authority would be put into a state of belief by pronouncements from that authority, unlike us, who would still doubt.
This does not necessarily contradict Peirce’s claims about the imagined doubts of the Cartesians, but what it reveals is that such claims must be made from the perspective of someone who accepts the scientific method as a guide for belief and doubt, as a defense against a Cartesian criticism of the system; they are not meant to apply to true Cartesians, who may actually doubt.

However, allowing such flexibility in what leads us to believe or doubt might seem to make Peirce’s project impossible. It could be that the standards of doubt accepted by some people are such that they simply don’t care about having their theories agree with experience, and so such discrepancies may not cause them to enter into a state of doubt. Peirce takes the method of tenacity, the refusal to acknowledge evidence contradicting one’s beliefs, and the method of authority, taking the pronouncements of certain authority figures to be the only evidence worth considering, to be examples of such systems. If that were all there was to it then we would still be stuck with the method of tenacity or of authority, since the motivation to revise them would be lacking. It would also undermine the validity of the scientific method. Even though Peirce isn’t setting out to defend the scientific method from an objective standpoint, he is trying to show that it has a certain kind of universal psychological appeal; if it turned out that the scientific method wasn’t universal then that would seem to make it substantially the same as the method of tenacity or authority. So, to explain how we have managed to escape such systems, and to illustrate a difference between the scientific method and the alternatives, Peirce maintains that there is a common conception “that another man’s thought or sentiment may be equivalent to one’s own”, and that it is an impulse too strong to be suppressed (116-117). Although what puts us into a state of doubt or belief is somewhat mutable, Peirce seems to be taking our respect for each other’s beliefs as something with a special status that is more widely shared and less flexible.
Since the methods of tenacity and authority don’t lead to agreement, at least not in all places and times, people may thus be led to doubt the validity of those methods because of that sentiment, and eventually come to what we would deem to be better strategies for belief fixation. In contrast, the scientific method takes seriously other people’s beliefs, since one of its core premises is that inquirers will eventually be led to the same conclusions, and hence is more psychologically acceptable.

Peirce realizes though that the belief that the scientific method will produce the same results for everyone simply can’t be sufficient on its own. Obviously any method for belief revision could come with the belief that it will produce universal results. The a priori method in philosophy seems to be brought up by Peirce as an example of just this phenomenon: the philosophers believe that the a priori method will produce universal results because of their faith in the special nature of reason, but that faith disappoints them. Thus Peirce is moved to think that mere belief in the convergence of opinions is insufficient; we must have a reason to think that the method really will produce agreement. Because of the failure of the a priori method Peirce concludes that to produce agreement it must be an external permanency that guides us, not features of our own minds. Thus he introduces reality as just such an external permanency, which is unaffected by our thoughts, and which affects the senses of everyone in the same law-like ways. Because acceptance of reality, so conceived, is part of the scientific method, he claims, we can thus rest assured that the method will lead us to agreement. Of course Peirce admits that, so defined, we cannot directly argue for the existence of reality. However, he has four arguments that he thinks should lead us to accept this idea without proof, which revolve around arguing that we won’t be led to doubt that conception of reality or that we are already committed to it in some way. It is unclear what the point of these arguments is given the shape of Peirce’s project. Since he seems to be arguing that the scientific method has a unique kind of psychological acceptability, whether we should accept the existence of reality seems to be beside the point.
Even if that conception of reality was necessary for the acceptance of the scientific method it would appear that he should argue that we do or will be led to accept it, not that we are rationally required to accept it. Perhaps then we can charitably interpret these arguments as designed to reveal the undeniable psychological plausibility of the notion, as further evidence that the scientific method is a good fit given our psychology.

Since a concern with truth also seems to be a deep-rooted and universal psychological inclination, it would seem natural for Peirce to conclude his account with a description of how the scientific method includes a plausible conception of truth and how that conception strengthens the attractiveness of the method, paralleling the discussion of reality. But in terms of The Fixation of Belief, the Peirce paper with which I am primarily concerned here, it is not clear what position Peirce took on truth. He does make a few leading remarks at the end, stating that the scientific method is the only one that can go wrong, which he indicates we should find more plausible than a method that can never go wrong. But, while this suggests that the scientific method has a unique understanding of truth or a unique relationship to it, it does not amount to an account of truth. It would fit the spirit of the paper, I think, if Peirce had asserted that we are only comfortable with an idea of truth where true statements correspond in some way to reality, and thus that we should simply accept truth as such and, furthermore, believe that the scientific method will produce claims that have the right sort of correspondence to reality, even though that correspondence can never be checked.

Peirce does take a position on truth in How to Make Our Ideas Clear, but it is not at all like his discussion of reality. There he indicates that truth, in general, is defined in terms of the method that people accept, such that whatever their method leads them to believe they call true. Thus in terms of the scientific method Peirce states that truth amounts to the opinion that is “fated ultimately to be agreed upon by all who investigate”, because, given the standards of objectivity that are part of science, we will only be satisfied with a belief that all competent investigators concur about. Obviously such a belief would make the scientific method slightly more psychologically acceptable, since we would prefer our method to lead us to the truth, but it doesn’t really distinguish it from any other method, since they all call true what they lead people to conclude.

So Peirce’s project in a nutshell, as I have described it, is to show that the scientific method has a universal psychological appeal, such that, given certain universal facts about what causes people to enter into a state of doubt, in the long run the scientific method is the only method that will really put doubts to rest. In a way this is even a kind of argument for the scientific method, since it might be taken to imply that everyone accepts certain very basic rules for belief revision and that these basic rules, such as respecting the viewpoints of other people, lead to an acceptance of the scientific method. If that was true then it would mean that no one could consistently deny the scientific method, and in that way we might be forced into accepting it as true.

2: Problems Facing Peirce’s Project

But while Peirce’s project paints an attractive picture of the scientific method, it is, unfortunately, subject to two kinds of criticisms. One is an external criticism, directed at how satisfying an account of the scientific method can be when it defends the method in terms of facts about our psychology, rather than facts about why it is an objectively superior method. Suppose we encounter a species with a different sort of psychology that leads them to accept some other, radically different, method. Surely it won’t be the case that both methods are equally good and thus that we are free to be led by our psychological inclinations. Rather it seems likely that one method or the other will be better in objective terms, and thus that we would have reason to use that method, no matter how psychologically unacceptable it is to us. Given that, it seems as if it should be at least possible to say in objective terms why, to the best of our knowledge, the scientific method is a good method. On the other hand, while such criticisms might seem important to us they don’t necessarily make Peirce wrong; we might accept his account of the scientific method as presented and simply seek an additional defense of that method from an objective standpoint. But there are internal criticisms of Peirce’s project to deal with as well, which argue that Peirce’s account of the scientific method isn’t psychologically plausible or that it won’t be the case that everyone will be led to accept a single version of the scientific method. If such criticisms can’t be addressed then it would seem that we really would need a defense of the scientific method in objective terms, if only to justify our choice of one particular psychologically acceptable method among many.

The first problem in terms of the psychological plausibility of the account arises in connection with Peirce’s appeal to our inherent tendency to take seriously the beliefs of other people, which he assumes will motivate us to accept a method that appears to lead to agreement in the long run. If we take Peirce at his word here, however, he simply seems to be wrong. Consider religion. Obviously there is no widespread agreement as to what variety of religion is correct, but few people are brought to doubt their religion or the epistemological merits of faith because of that. Indeed religion seems to be an example of the method of authority triumphing over that sentiment. Similarly, we might plausibly suppose that someone who accepted the method of authority might be unfazed by differences in opinion; they might just assume that others are listening to the wrong authority. To solve such problems it would be natural to appeal to something like objectivity; we would assume that people had a natural tendency to seek agreement only with respect to objective matters, the bread and butter of science, and to take agreement as irrelevant with respect to the subjective, including such things as religion, art, and hallucinations. But Peirce’s account seems to leave no room for such a notion. If we were to introduce objectivity it would have to be as another basic part of our psychology, meaning that we would have to assume that people are naturally inclined to take certain things as objective and others as subjective and that the judgment as to which is which may be informed by later development. We would have to grant that judgments about objectivity may vary because in real life people don’t agree, and because we have no other suitable primitive psychological notions to explain those differences with. 
Since objectivity limits the applicability of the scientific method, it is thus possible for there to be some people who take nothing to be objective, or who take to be objective things that the scientific method doesn’t lead to agreement about (such as hallucinations), and who thus reject it for failing to produce that agreement. This would essentially bring us full circle, back to the position we were in before the appeal to a universal tendency to respect each other’s beliefs, where it would seem that the scientific method would be psychologically acceptable only to some. Thus if we are to successfully develop a more nuanced understanding of the tendency to respect each other’s beliefs that reflects our actual dispositions, and if we are to introduce an account of objectivity that doesn’t undermine the entire project, a more subtle approach will be needed.

A second problem for Peirce’s project is his description of reality. As he defines it, reality is a heavily metaphysical notion. It is metaphysical in the sense that while reality is described in terms of properties it has, such as being an external permanency, the cause of our sensations, etc., nothing is said about what it is in a way that would allow us to identify it. Indeed it seems like such an idea would be most at home within a Kantian picture of the world, where reality itself is forever unknowable and where all we really have access to are sensations. And if that isn’t enough to convince you that Peirce’s “reality” is highly metaphysical, I would point out that we could easily replace “reality” with “god” without any substantial changes in the scientific method as described; instead we would claim that god is an external permanency which is the source of our sensations and that a belief in god is essential to the scientific worldview, yielding a position reminiscent of Berkeley’s. This by itself should make us suspicious of including such a concept within the scientific method, especially if that concept plays as important a role as Peirce claims. And the aforementioned arguments that Peirce gives for his conception of reality do not solve this problem; if anything they aggravate it, since they could be given for any metaphysical notion that we took as important for the scientific method.

Of course Peirce’s fundamental motivation for this conception of reality seems to be the conviction that such a belief is required for us to find the scientific method plausible. But such a claim hardly fits with the picture of doubt and belief that he has constructed. The idea that we need some principles to demonstrate the validity of our method seems strangely Cartesian. Peirce has been advocating a conception of inquiry in which the central idea is that doubts are raised only by genuine evidence against the propositions we hold to be true, not vague metaphysical worries, so that we might be satisfied with beliefs that are simply the best we are able to come up with, even if the possibility for the future revision of those beliefs exists. If we simply apply that idea to our method of inquiry itself, we should conclude not that we need something like an appeal to reality to remove our doubts about it, but rather that no doubts need arise, so long as our method doesn’t fail us by producing bad theories. So, even if Peirce is right and we do require some assurance that our method will produce agreement, since other methods, such as the a priori method, have made that promise and failed, it would seem that the assurance would be best provided by actual progress towards agreement. After all, we could equally well provide justification for the a priori method by appeal to metaphysical principles, but that wouldn’t make it a viable alternative to the scientific method.

This may imply that any discussion of reality should just be omitted from a discussion of the scientific method. But, like objectivity, it seems hard to reconstruct a version of the scientific method without some position on reality. After all, we test our theories in the scientific method by attempting to compare them to reality; we would not accept or reject a theory because of how well it applied in the context of a dream or a hallucination. Obviously there is some overlap between the distinction between the real and the unreal and that between the objective and the subjective; perhaps we might even explain one in terms of the other. And I maintain that without some account of these distinctions we cannot be said to have given an account of the psychological acceptability of the scientific method; rather, we would be endorsing a family of methods, each of which makes different judgments regarding reality and objectivity, and which thus proceed in radically different ways. At the very least it needs to be shown that there is a tendency towards convergence in these distinctions in order to make the claim that the scientific method will lead to agreement more plausible.

Finally, Peirce’s definition of truth is not without its problems either. Although truth is defined much less metaphysically, it still feels very much like an empty notion. We may grant that the end products of the scientific method are true, but what does it mean to say that they are true? If truth is defined only as the end of inquiry, that doesn’t make for a very satisfying definition of truth, nor a very useful one. Additionally, truth is not something we withhold our judgment from until some indefinite time in the future; rather, it is often of significant importance to us now which of a number of alternatives is true. Indeed such considerations seem very much bound up with the process of inquiry. More specifically, scientists don’t prefer one theory to another just because the scientific method tells them to; they prefer one theory to another because they conclude that one is more likely to be true or closer to the truth, a judgment that is itself based on evidence. Again, as with reality, it appears that truth is a distinction that has a role to play in the activity of scientific inquiry, and thus that Peirce is doing a disservice to his description of scientific inquiry by putting truth essentially outside it, as a conclusion to inquiry as a whole (just as he put reality outside it by making it a precondition for our acceptance of the scientific method).

Although these problems might seem disconnected from each other I think that they stem from a single source: the fact that any method for inquiry will be bound up with something like a complicated conception of reality that includes ideas about what is real and under what conditions we have access to reality, and that these ideas determine how the method is actually used. But Peirce’s picture of reality is sparse and his psychological theory doesn’t indicate how we might come to agree on these issues; ultimately this creates a disconnect between the method Peirce says that we will find psychologically acceptable and the scientific method.

3: Towards a Clearer Conception of the Scientific Method

To reiterate, there are two kinds of problems facing Peirce’s account. One is the problem of defending the value of the scientific method from an objective standpoint, where we could conclude that the scientific method is good for the people who use it (in terms of survival, perhaps, because it leads to “more accurate” beliefs), not just from a subjective standpoint as Peirce does, where all we can say is that we are inclined to believe it or that it is psychologically comfortable. Secondly, we have a number of essentially similar problems, all of which revolve around developing clear and psychologically acceptable descriptions of concepts such as objectivity, reality, and truth, so as to explain their role in inquiry within the scientific method and why we draw those distinctions as we do. Such an account would have the benefit of bringing us closer to a description of the scientific method as actually practiced, and it would remove the need for Peirce’s abstract and somewhat problematic definitions. Of course it is impossible to give the “correct” definitions for such concepts, but we can aim to give definitions that reflect distinctions in actual investigative practice and which are objectively useful to the investigator (i.e. lead to “successful” theories more often).

Even though they are different problems, however, I think that they can be solved with essentially the same approach. Consider then how we might go about arguing that the scientific method is an objectively good method for forming beliefs. That would involve arguing that in a wide range of circumstances, and for a wide range of interests, the beliefs that the scientific method produces will be effective. But if we choose to consider humans in a complicated society, making such an argument will be incredibly complicated, for obvious reasons. Moreover it might not even be the case that the scientific method is universally preferable: if using any other method besides that of authority is taken as a grievous error by society, using the scientific method in anything like its full scope could be downright dangerous. But this simply illustrates that what we are after is a defense of the scientific method modulo such “irrelevant” factors, which also helps with the problem of the excessive complexity of such an analysis. What it would seem that we want, then, is a discussion of the scientific method in terms of very simple agents, agents who aren’t part of a complex society, and who don’t have preconceptions and biases that may close off certain investigative possibilities. Such an investigation might also solve the other problem facing Peirce’s account, that of providing a more plausible and pragmatic account of certain concepts used in inquiry. At least it may if we approach the problem by building up the scientific method, piece by piece, for our agents, developing accounts of these distinctions simple enough to fit into their psychology and which we can defend as useful for them. Thus we would arrive at a picture of the scientific method in which the method as a whole is beneficial for these simple agents and where each of its components is clearly defined and independently productive.

In order to do that we must describe a psychology for our agents, complicated enough that they could develop ideas concerning inquiry and put them to use shaping how their beliefs develop, but simple enough that we avoid the complications associated with human psychology. Furthermore, to simplify my description, I will omit any discussion of how these agents develop cognitive apparatus (i.e. how they learn); all that matters for our purposes is to demonstrate that some kinds of cognitive apparatus are better than others. Obviously then the agents must have some mental apparatus that we can think of as constituting predictions or theories about the world (which I will just call predictions to distinguish them from our more complicated theories, although they are analogous). On the basis of these predictions the agent is led to act, and those actions may be either to their benefit or disadvantage. We can understand these predictions, in the simplest possible terms, as conditional statements where the antecedent involves a pattern that the agent attempts to match up to the previous inputs it has received and the actions it has taken, and the consequent is an event that is predicted to occur subsequently. Obviously there must be substantial complexity in the pattern matching if these predictions are to do any real work, but pattern matching is a well understood problem. Of course choosing to act in accordance with a particular prediction (such as taking actions so that the antecedent of a prediction will match) is itself an action. So to allow the agent to make predictions about its own process of inquiry (meta-predictions which are applied to object predictions) we can further suppose that the agent attempts to apply any relevant predictions when making the choice whether to use a prediction or not.
These would be meta-predictions whose patterns match, at least in part, that object prediction itself, and which predict success or failure as a result of choosing to act in accordance with that object prediction. If this meta-prediction predicts failure then the agent will not in fact choose to act in accordance with the object prediction, and if the object prediction is blocked often enough then it will be discarded completely. Thus in this way the agent can be said to have ideas about its process of inquiry, as object predictions that are repeatedly blocked by meta-predictions from being used can be thought of as being rejected by the agent. And an agent will have objectively better meta-predictions if they lead the agent to use only their best object predictions.
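The machinery just described can be made concrete with a small sketch. Everything here (the class names, the representation of patterns as predicates, the blocking threshold) is my own illustration of the picture above, not anything the argument itself depends on:

```python
class Prediction:
    """A conditional: if `antecedent` matches the agent's history,
    expect `consequent` to occur."""
    def __init__(self, antecedent, consequent):
        self.antecedent = antecedent   # predicate over the agent's history
        self.consequent = consequent   # the event expected to follow
        self.blocked_count = 0         # times a meta-prediction vetoed its use

class MetaPrediction:
    """A prediction about predictions: it matches (part of) an object
    prediction and forecasts whether acting on it will fail."""
    def __init__(self, matches, predicts_failure):
        self.matches = matches                    # predicate over predictions
        self.predicts_failure = predicts_failure  # True -> expected to fail

class Agent:
    def __init__(self, discard_threshold=3):
        self.predictions = []        # object predictions
        self.meta_predictions = []   # predictions about predictions
        self.discard_threshold = discard_threshold

    def consider(self, prediction, history):
        """Decide whether to act on an object prediction."""
        if not prediction.antecedent(history):
            return False
        for meta in self.meta_predictions:
            if meta.matches(prediction) and meta.predicts_failure(prediction):
                prediction.blocked_count += 1
                # An object prediction blocked often enough is discarded.
                if prediction.blocked_count >= self.discard_threshold:
                    self.predictions.remove(prediction)
                return False
        return True
```

An object prediction that a matching meta-prediction repeatedly vetoes is eventually dropped entirely, which is the sense in which the agent can be said to have "rejected" it.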

Such agents are, admittedly, quite dissimilar from us. Indeed they don’t have concepts, and even describing them as having beliefs would be a stretch. Because of these differences some allowances must be made when understanding what their meta-predictions imply for our own process of inquiry. Previously I claimed that it would be best to understand how certain conceptual distinctions involved in the scientific method would be useful for such agents, but our agents don’t have any concepts. I take it as natural to simply understand a pattern or part of a pattern involved in a prediction as something that could be a concept for us, given that patterns either match or don’t match for them, while concepts either apply or don’t apply in a given situation for us. If that is too much of a stretch then we are free to embellish the psychological machinery of these agents, assuming that the patterns in their predictions have an upper bound on length, but that they may contain tokens which are substituted for predefined patterns. Then we could consider these tokens to be concepts, and these token/concepts would prove useful by making possible more complicated patterns; it all amounts to the same thing though.

Assume that these agents are in a world that is not completely random, meaning that there is some statistical regularity, so that past performance often usefully serves as a guide to future performance. In other words, a world like the one we live in. In such a world a simple and very useful meta-prediction would be one that predicts failure for the use of an object prediction if it has failed with enough regularity in the past. And this meta-prediction could be made even better if it was changed to be more in line with the laws of probability, to be more sensitive to sample size, etc. But while such technical improvements are useful they will only get the agent so far; there are a large number of possible object predictions (we might even assume the agent develops new object predictions at random). And because the agent’s actions are directed by the predictions it has been entertaining, it is unlikely that its past experience will have anything to say about a new prediction until it tries it out a number of times. Obviously this is inefficient.
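A minimal version of such a meta-prediction might look as follows. This is only a sketch of the idea; the Laplace smoothing, the threshold, and the minimum-trials figure are my own illustrative choices, standing in for "more in line with the laws of probability" and "sensitive to sample size":

```python
def estimated_success_rate(successes, failures):
    """Laplace-smoothed estimate: with little evidence the estimate
    stays near 0.5 rather than overreacting to a small sample."""
    return (successes + 1) / (successes + failures + 2)

def predicts_failure(successes, failures, threshold=0.3, min_trials=5):
    """Predict failure for an object prediction only once enough
    trials have shown a sufficiently low success rate."""
    if successes + failures < min_trials:
        return False  # too little evidence; let the prediction be tried
    return estimated_success_rate(successes, failures) < threshold
```

The `min_trials` guard is what keeps genuinely new predictions from being blocked before they have ever been tested.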

What is needed is more data. And at this point let us further suppose that our agent is not alone in the world, but that there are other agents as well, and that they have a language through which they communicate. Although we could tell a story here about how the agents developed a language and learned about each other’s existence, I see no need to; for the purposes of this paper we can assume it is just built into them. Given these conditions a further improvement could be made to the meta-prediction if it made estimates concerning the probability that a prediction will be successful not just based on the agent’s experiences alone, but on the experiences of other agents as communicated through language as well. If the world works the same for all the agents, this modification will have the effect of providing a much larger amount of evidence on which to base estimations concerning probability, which will obviously make those estimates more accurate.
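As a sketch, pooling reported trials with the agent's own amounts to nothing more than summing the evidence before estimating (the (successes, failures) tuple representation is my own convenience):

```python
def pooled_success_rate(own_record, reported_records):
    """Estimate a prediction's success rate from the agent's own trials
    plus trials reported by other agents, on the assumption that the
    world works the same way for all of them."""
    successes = own_record[0] + sum(s for s, _ in reported_records)
    failures = own_record[1] + sum(f for _, f in reported_records)
    return (successes + 1) / (successes + failures + 2)  # Laplace-smoothed
```

With two trials of its own and twenty reported ones, the agent's estimate is dominated by the pooled evidence, which is exactly the efficiency gain described above.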

But if the agents are in a realistic world it will not be the case that all of the reports made by other agents are accurate. There will be some topics in their language that correspond to things that are intrinsically subjective, and which will thus work differently for all the agents. There will also be cases where agents make inaccurate reports because they are in situations where their perceptions are distorted. And there may even be parts of the language that are used inconsistently, so that the agents might miscommunicate when they use them. In all of these cases it would be better for the agents’ estimations of probability if they ignored those reports. This brings us to our first “concept”, which I designate [O]: a pattern that matches only those reports which are “reliable”, i.e. those made on topics which are primarily independent of the reporting agent’s psychology and which are made under conditions where their senses are working properly. Since these are the reports that track the actual events in the agents’ experiences that carry over from one to another, obviously making probability judgments based only on [O] reports will be an improvement. And since I have omitted any mention of the learning dynamics, that is all I am required to say: that such a distinction would be beneficial to them. But it seems plausible that the agents could develop that distinction via feedback concerning which reports were the basis for correct meta-judgments and which weren’t.

Of course if agents can make inaccurate reports then it is probably also the case that they can have inaccurate experiences, or experience things that are purely subjective, such as hallucinations. Let us further suppose that there is some regularity to when such distortions occur or how they present themselves (a real-life example would be feelings, which are presented in their own distinct way to us, and which we are thus easily able to recognize as subjective). Furthermore, let us also suppose that these distortions are not themselves worth predicting; they cannot usefully be used to predict anything substantial, nor are they intrinsically important to the agent’s goals. Again, this seems to be how the world we find ourselves in works: although we may hallucinate or misperceive, I know of no theories which accurately predict facts important to our survival on the basis of such experiences, nor are they intrinsically relevant to our survival. Thus it would be better for our agents’ estimations of probability if they discarded such experiences. And this constitutes the second concept I am interested in, [R], which matches only those experiences that are deemed “reliable” in the way just discussed. Again, this says nothing about how such a distinction might be developed, but it seems plausible that, like [O], it might be developed through feedback concerning which kinds of experiences, and under which circumstances, make the basis for good judgments concerning probability. Or, if the agent already has a concept such as [O], it might be developed through a kind of short-cut, whereby the agent applies its standards for [O] reports, concerning reliable conditions and topics, to its own experiences.

Finally, in any realistic situation simply entertaining a prediction will cost an agent something, both in the time it takes the agent to consider whether to employ it, and because every prediction takes up some cognitive space that the agent could be using to store some other possible prediction. And unfortunately for our agent its current meta-prediction has the potential to lead it to entertain a large number of predictions that simply waste space. Because the agent is making its judgments about those predictions based only on experiences deemed [R] and reports deemed [O], any prediction with an antecedent that requires a non-[R] experience or non-[O] report to match will never be considered to have any evidence for or against it. Of course this is not to imply that the agent will never use that prediction. The meta-prediction will obviously not block object predictions that lack support, otherwise genuinely novel predictions would never have a chance to be tested. Thus the meta-prediction may pass this object prediction, and occasionally the agent may have the appropriate non-[R] experience that is required to trigger its use. And it may even be that this prediction is occasionally successful, although it can’t be regularly successful, since, as previously mentioned, anything that was part of a regularly successful prediction would be deemed [R]. But, even so, such an occurrence is not worth throwing out the cognitive apparatus developed so far. That cognitive apparatus is useful because when a prediction matches a non-[R] experience and is successful, that success is a fluke. And so the agent would be better off getting rid of all of its dead-weight predictions, even if a minority of them are practically effective some of the time. To prevent the meta-prediction from passing these dead-weight predictions a third concept must be introduced, [S].
[S] is a concept that matches the theories under examination themselves, rejecting theories that can only match non-[R] experiences because of their structure (either because they match things like feelings or because they match only under conditions where the agent’s experiences are unreliable). Although, unlike [R], this concept cannot be extrapolated from another already developed, it is not hard to imagine a learning dynamics that could give rise to it: it could be developed by gradual improvements that match more and more kinds of predictions, with versions that go too far losing out because the agent’s competitors have access to a greater number of useful predictions.

After all these improvements are in place the agent’s meta-prediction works roughly as follows: it evaluates predictions based on evidence, where evidence is a certain class of experiences and reports of experiences, and it only considers those predictions that it knows how to evaluate. To me this looks a lot like a version of the scientific method, stripped of any conception of simplicity or historical precedent.
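The whole filtering pipeline can be made concrete with a small sketch; all of the names and data structures below are my own invention for illustration, not anything the paper specifies. Reports are screened by [O], experiences by [R], and candidate predictions themselves by [S] before any evidence is counted:

```python
# A hedged sketch of the agent's final meta-prediction. The 'reliable'
# flags stand in for whatever learned criteria implement [O] and [R].

def o_filter(reports):
    # [O]: keep only reports on objective topics, made under good conditions.
    return [r for r in reports if r["reliable"]]

def r_filter(experiences):
    # [R]: discard hallucinations and other unreliable experiences.
    return [e for e in experiences if e["reliable"]]

def s_filter(predictions):
    # [S]: reject predictions that could only ever match non-[R] experiences,
    # since evidence for or against them can never accumulate.
    return [p for p in predictions if p["testable"]]

def meta_prediction(predictions, experiences, reports):
    # Count how much admissible evidence each admissible prediction matches.
    evidence = r_filter(experiences) + o_filter(reports)
    return [(p["name"], sum(1 for e in evidence if p["matches"](e)))
            for p in s_filter(predictions)]

predictions = [
    {"name": "all X are Y", "testable": True,
     "matches": lambda e: e["kind"] == "X"},
    {"name": "untestable theory", "testable": False,
     "matches": lambda e: True},
]
experiences = [{"kind": "X", "reliable": True},
               {"kind": "X", "reliable": False}]   # e.g. a hallucination
reports = [{"kind": "X", "reliable": True},
           {"kind": "Z", "reliable": True}]

print(meta_prediction(predictions, experiences, reports))
# → [('all X are Y', 2)]  (the hallucination and the untestable theory
#    are both ignored)
```

The sketch makes the division of labor visible: [O] and [R] delimit the evidence, while [S] delimits which theories are even candidates for evaluation.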

4: From Simple Agents to Peirce and Beyond

Obviously these simple agents, no matter how they develop, will always be somewhat alien. But I think that, as mentioned above, there is something resembling the scientific method in them. They develop better predictions through a process that looks a great deal like experimentation; they take their predictions and attempt to compare them to the available evidence. And ultimately that is what makes the scientific method unique: in it alone are theories evaluated by what appears to be an external standard independent of our psychological inclinations. More importantly, I think that the “concepts” I have described as useful to them can plausibly be seen as having analogues in our own conception of the scientific method. [O] seems to correspond to something like our notion of objectivity, since the agents effectively use it to distinguish between reports that they can and can’t trust. Likewise, we think of some reports as being objective, and thus worth considering, while others we take to be subjective, and thus best set aside when doing science. Similarly [R] seems to correspond to our “ordinary” (i.e. pre-philosophical) conception of reality, which I take in its most naïve form to distinguish experiences that we can reliably generalize from, the content of which we take to be real, from those that aren’t properly the basis of such generalizations, such as hallucinations, the contents of which we consider unreal. Similarly, for our simple agents [R] separates the experiences they will use as evidence from those that they won’t. Finally, [S] too seems to track a distinction that is naturally made by scientists, although there is no universally accepted word to describe it.
The distinction in question is between those theories that can be tested, and thus which scientists will consider seriously, versus those that can’t, and which will thus be rejected out of hand[1]; a distinction between the substantial or meaningful theories and those lacking in substance or meaningless. As described [S] performs a similar function for our agents since it leads them to reject without further consideration those predictions that they can’t match to [R] experiences (real experiences).

One concept that may appear to be lacking for these simple agents, however, is that of truth. Does this imply that truth is an essentially redundant notion, serving no purpose of its own, as some have claimed? On the contrary, I am inclined to take a cue from Peirce here and assert that their meta-prediction as a whole, their standards for inquiry, defines what truth is for these agents. The concepts discussed here then do not supplant truth; rather they are partially definitive of it. And thus truth would amount to something like “X is true if and only if it is substantial/meaningful and agrees with reality (the content of the real experiences of agents and reliable reports)”. Obviously I can’t “prove” that this constitutes a conception of truth or something like it for the simple agents, but I can show that it has certain properties that we would expect of a conception of truth. First, it is something that these agents can never be sure that they have with complete certainty, since there is always more to learn about reality (as contained in future experiences), just as we think that we will never be able to know with complete certainty whether our best scientific theories are true. Secondly, the definition strongly implies that something can be true only if it corresponds with reality, and correspondence is an extremely intuitive way to capture truth in the eyes of many. To see how this is so, consider a very simple theory, one that asserts that “all X are Y”. To consider what this theory asserts we must consider what X and Y mean. Given that the truth or falsity of this statement is determined by whether the agent takes that assertion to fit with its experiences, X and Y can only mean “things giving rise to experiences that the agent would deem ‘X’ to match when the agent is exposed to them in circumstances where the agent would judge them to be [R]” and “things giving rise to experiences that the agent would deem ‘Y’ to match …”.
And if the statement is actually true it must be the case that there will never be an [R] experience in which ‘X’ will match without ‘Y’ matching. This seems to imply that it can be true only if it expresses the way things “actually are”, since, in this case, to be true it must be the case that the things X and Y refer to really are co-extensive, as the theory asserts; otherwise [R] experiences in which they come apart would be possible, and hence it wouldn’t be true.

Since the method of inquiry that is best for these agents seems to reflect significant and practically important distinctions within the actual scientific method, it seems possible that we might be able to use them to address the problems facing Peirce’s account of the scientific method. The first problem I described as facing Peirce’s project was that it didn’t give a plausible account of our tendency to take the beliefs of other people seriously, which Peirce leaned on heavily, because of our apparent willingness to ignore those beliefs in many cases. The [O] distinction, or objectivity, our corresponding concept, can easily be added to this account to provide an explanation of that tendency that agrees with our observations. Instead of supposing that people are disposed to take each other’s beliefs seriously in general, we can suppose that they are disposed to take each other’s beliefs seriously in objective matters. Indeed this is exactly how [O] functions for our simple agents: it tells them which reports to ignore and which reports to take seriously when deciding the validity of their predictions. Similarly [R], or a conception of reality significantly similar to it, could also serve to replace Peirce’s troublesome metaphysical definition of reality. And, again, it can also systematically explain why some experiences are taken to be relevant in the scientific method and others aren’t, eliminating the possibility that the scientific method won’t produce agreement because inquirers won’t agree on what constitutes evidence. Finally, we can improve Peirce’s account of truth, not by denying that truth is the end of inquiry, but by making it better reflect the standards of scientific inquiry, such that a true statement is one that reflects reality, and so on.
Understood in this way a conception of truth becomes a general guide to inquiry, such that thinking about which possibility is true under such a conception reveals which one the scientific method would endorse, or what we would have to do to get the scientific method to endorse one of them. But while such concepts are easily added to Peirce’s description of the scientific method, simply tacking them on is not by itself an end to the difficulties.

It may be true that adding such distinctions to Peirce’s account would make it better reflect the reality of scientific inquiry. However, by introducing such concepts that account would also become more complicated. Peirce seems to aim to show that the scientific method is the one method that is completely psychologically acceptable. But when we add these additional distinctions, it would appear that we also open up the possibility that people will draw these distinctions differently, yielding different and incompatible versions of the scientific method that they will find acceptable. One way out of this dilemma would be simply to claim that fundamentally we all draw the same conceptual distinctions, at least when it comes to matters of “rationality”. Such a claim doesn’t seem likely to be true, however, and it is far too “rationalist” to ever be accepted by someone like Peirce. So if we accept that people may draw these distinctions differently, what we need is some psychological force that will eventually lead them to converge, and not just as a matter of convention but to some psychologically unique point. I think that frustration might do the job. It is psychologically plausible to suppose that people become frustrated when their theories lead them to undertake unsuccessful actions or when their theories are insufficient to predict the results of their actions. Now, as I have just discussed with respect to the simple agents, there is an optimal way to draw these distinctions, one that leads to the largest number of successful theories. If the distinctions are drawn differently then people will either end up having more false theories, leading to unsuccessful actions, or too few theories, preventing them from using them as a guide for successful action. In either case people will be frustrated, and that frustration, I suppose, will lead them to draw those distinctions so as to minimize it.
Thus ultimately those distinctions will be made in the same way by everyone, since which distinctions are objectively best is a fact that is independent of individual psychology.

But while that is a psychologically plausible account it does lean on the claim that the scientific method, specifically a scientific method in which there are particular ideas about objectivity, reality, and what it takes to be a substantive theory, is objectively the best at yielding accurate theories. And that is a claim that Peirce would not be willing to make, since it goes beyond his aim of showing that the scientific method is psychologically inevitable. Unfortunately for Peirce I don’t see a way out of appealing to the objective superiority of the scientific method; without such an appeal there simply seems to be too much room for variation in what might be taken to be the scientific method to claim that the scientific method, as we know it, is psychologically necessitated. I, however, would not back away from claiming that the scientific method is objectively best, because I think that that claim itself is something we can test, and thus something which we can conclude with confidence. That conclusion is, however, dependent on certain facts:
1. The world displays regularities
2. There are multiple agents that have the ability to communicate
3. The world works the same for all agents
4. Miscommunication and misperception are possible
5. Cases of misperception are not intrinsically important
6. Entertaining a prediction comes at some cost to the agent
7. The world is complicated enough that being better able to predict facts about it in general trumps simpler strategies
Each of these facts seems to be true, though. And, more importantly, it would seem that if any of those facts weren’t true the scientific method would eventually lead us to become aware of it (at the very least through the general failure of that method). Now some might call such reasoning circular, and in a sense it is; I cannot demonstrate that we aren’t suffering from massive cognitive defects and deceptions that make knowing anything impossible. But I still think it provides a strong kind of justification for the scientific method, because to deny that it is an effective method would seem to involve denying one of the conditions on that list, which few would. Furthermore, we are not restricted to reasoning in this way about the scientific method alone; we could also ask what would have to be the case to make the method of authority or the a priori method work, and we can observe that those facts seem not to hold. Finally, given that the scientific method will only be successful if those facts hold, this account is clearly the best we can do in terms of a defense of the scientific method, or any method, because which method is best will clearly depend on facts about the situation that we can’t know a priori, and so no matter what method we settle on, to defend it we must appeal to facts in a way that will seem circular. Thus this defense of the scientific method really is a good one; it is the best we can possibly do, and constitutes some improvement over taking the scientific method on faith or because of its psychological appeal.


[1] Or perhaps should be. String theory in its current form seems untestable, and seems like bad science to some, since it has yet to yield any substantial results, and since the only argument in its favor is its mathematical simplicity.

December 20, 2007

The Arbitrary And The Rational

Filed under: Epistemology — Peter @ 12:00 am

The earliest explanations of the world, and thus the earliest “science”, were probably myths, which explained the world by positing supernatural beings, such as spirits, who made it behave as it did. The next small step after that towards what we recognize as science was taken by the Greeks, who tried to explain the world in a way that was less an amalgamation of different stories and more of a theory created explicitly for the purpose of understanding. Of course their theories leaned heavily on projecting agency onto the world as well, but they were also more systematic in the way they attempted to explain everything by appealing only to a small number of general principles (at least in the beginning). By their own standards I consider those explanations to be fairly successful. Obviously they weren’t able to make predictions with them, but they were able to shoehorn the observed phenomena into being described by some story about the elements or the Platonic solids. And for them that was sufficient, because it allowed them to at least posit some reason for events to happen as they did, which makes the world an understandable and more regular place.

And for a long time Greek science was considered the pinnacle of what we could know about the world. Even when new phenomena were discovered people would simply find a way to fit them into the Greek model. Eventually though this period of relative scientific inactivity came to an end, people began to challenge the correctness of Aristotle, and new theories were put forward to explain the world. Honestly I don’t know where the impetus to shake off the old science came from; perhaps it was the development of new mathematics which made precise scientific theories possible where previously they had been inconceivable, or perhaps it was as a byproduct of the beginnings of the industrial revolution, where engineering advances required more precise predictions in order to determine whether a machine that was under consideration would perform as expected. Given our modern attitudes such a scientific revolution wouldn’t be hard to motivate; all we would have to do would be to point out that the new theories predicted more and more accurately than the Aristotelian ones did. But that is an attitude that came later, as a result of that scientific revolution. To argue against Aristotelian science the new scientists needed something else, something that would convince their learned colleagues who were not particularly inclined to care about practicality. Thus they argued that Aristotelian science was arbitrary, and so in a sense irrational. And they presented their new theories as deduced from rationally indubitable principles, and thus as necessary facts about the world.

Some of the early scientists who argued for the superiority of the new science in that way were part of the rationalist movement. The rationalists thought that everything could be understood by the application of pure reason, an attitude that seemed plausible at the time because many believed that god guaranteed that fact: either because god was a perfectly rational creator, so that everything must have some reason behind it, opening up the possibility that the reason might be uncovered by the human intellect, or because god charitably designed humanity with that capability. Descartes is a good example of such a rationalist. Descartes was “originally” a scientist rather than a philosopher, although the distinction didn’t mean much at the time; he was focused primarily on devising laws for things such as optics and motion. But Descartes was not happy just to devise laws that seemed accurate; he referred to such laws as hypothetical reasoning, meaning that they might be the case but that they weren’t confirmed. Descartes wanted his system to follow from purely rational principles, and not just to be the best fit to the evidence. And because of that he was led to more philosophical endeavors, which is unavoidable if you want your physics to follow from some kind of metaphysics. There, Descartes hoped, he had found principles, such as that bodies are defined by extension, from which everything else would follow and which were completely rational.

But did Descartes and the other rationalists really accomplish what they hoped, setting aside for the moment the errors in their systems? Obviously they thought that the principles that they had developed as a starting point were “rational”, but from a modern perspective they seem as arbitrary as the principles of Aristotelian physics, just in a different way. Consider the idea that bodies are defined by extension. If that principle is rational then how can we consistently entertain the idea of point particles, which have location but no extension? And yet we do entertain them, quite productively. What we have done then is simply move the arbitrariness around. Aristotelian physics was claimed to be arbitrary because the rules the world was supposed to operate by did not themselves fit into any larger scheme. But the systems of the rationalists were equally arbitrary because the principles they picked as a starting point were not themselves guaranteed, as was claimed, but were simply highly intuitive to the rationalists. But, despite that, surely they were correct in rejecting Aristotelian physics, and investigations in a “rationalist” vein continue today; although people aren’t looking for more “rational” principles, scientists are always looking for more fundamental laws which will explain the laws that we have already established (and more fundamental laws for those, and so on). How then can the rationalist scientific project, or some successor to it, be so successful given that we know it is flawed?

Let us return to the beginning then and ask whether Aristotelian science was really flawed because of how arbitrary it was. Certainly that seems to be its fault, especially if it is brought to our attention that people would explain the fact that certain substances put people to sleep by saying that they had the property of putting people to sleep (actually they used a Latin word to name that property, but it amounts to the same thing). And that is no explanation at all. But, on the other hand, Aristotelian science had its virtues as well. Its primary virtue is that it proceeded to its laws (at least initially) by observation alone, just as modern scientists are supposed to. An Aristotelian scientist would go out into the world, observe what occurred, and then label and systematize it. And that is what modern scientists do too. Sure, we are able to explain why certain substances put people to sleep now, but ultimately, at the level of the most basic particles, we still attribute properties to them, such as the disposition to engage in certain interactions with other particles, without any “reason” for doing so. Rather we simply point out that those properties explain our observations extremely well, and are hence justified in that way. And that is what the Aristotelian scientist was doing when he described substances as having the power of putting people to sleep, just at a much coarser level.

What Aristotelian science was really suffering from was dogma. It simply wasn’t open to revision of any kind with respect to the main body of the theory, and thus scientists were restricted to adding more and more special cases, and unexplained properties, in order to make it work. Thus what the rationalists did right was rejecting the existing dogma and trying something new. But the rationalists were not against dogma. Indeed if the rationalist project had gained a greater foothold in science they would just have created their own dogmas. Instead of unquestionable Aristotelian principles there would have been unquestionable rational principles, and scientists would have had to hammer their explanations into being compatible with and derivable from those principles. Indeed I can see a foreshadowing of that possible outcome in Descartes’ own work. Originally he “deduced” that the Earth must move. However he realized that making that claim would be bad for his career and, showing a somewhat typical lack of intellectual character, he fiddled with his deductions until he could conclude that the Earth didn’t move. Not only is that absurd for a system which is supposed to be deductive (you can’t fiddle with a deduction; if you revise it you must do so by rejecting some of the principles you started with), but it illustrates how future scientists would have fiddled with their deductions until they were compatible with the rational principles. Fortunately rationalism didn’t get a grip on the scientific community, probably because new theories soon came along, such as Newton’s, which were more successful and which left the idea that they were derived from purely rational principles somewhat by the wayside.

The fact that modern scientific investigations always seem to look for the underpinnings of what are currently held to be basic laws we can attribute not to an aftereffect of the rationalist program, specifically a kind of search for more rational and simpler principles, but rather to an ongoing attempt at rejecting dogma. Specifically, scientists look at the current laws and attempt either to show them to be wrong, or to show them to be a manifestation of other laws, thus rejecting them as primitive. And so if we attack what seems arbitrary it is not because being arbitrary is essentially bad or irrational, but because it might be a dogma which we might productively challenge by replacing it with some more complicated and more accurate description of the world.

December 17, 2007

The Value Of Epistemology

Filed under: Epistemology — Peter @ 12:00 am

It is important to occasionally ask ourselves what the value of the philosophy we do is, not because we need to justify ourselves to someone making a budget, but to prevent ourselves from wandering off into realms of abstraction that are essentially meaningless since they don’t embody any assertions about the world. And, even if we aren’t in danger of doing that, bringing our attention back to the particular problems that we are trying to solve is often worthwhile by itself, as it is also possible to get lost in constructing a theory and in doing so lose sight of what the theory was supposed to do. In that vein let us ask what the value of epistemology is. The answer initially seems obvious: epistemology, done properly, should reveal how we can acquire knowledge. And thus we could use such a theory to eliminate beliefs we only mistakenly believe to be knowledge, as well as help shape our future investigations to better lead us to further knowledge. Hopefully the value of knowledge itself is obvious: knowledge is more likely to be true than beliefs developed in other ways, and true beliefs are obviously valuable instrumentally.

But is epistemology really valuable in this way? To say that it will lead us to possess more knowledge implies that how we presently come to such beliefs is deficient. However I see no evidence of such a deficiency, at least not where it matters most. The sciences seem perfectly capable of coming to true conclusions on their own without additional help. Indeed the only thing that seems like it might help science is simply more evidence, and an epistemological theory is not going to supply that. At best it would seem that such a theory could simply say why science is so successful, but that explanation is not, by itself, valuable. Of course not all disciplines are as successful as the sciences. Indeed philosophy itself seems like a field that might greatly benefit from a theory of knowledge being applied to it. However that gets us into deep waters concerning whether philosophy really does need improvement and about what the value of philosophy is in general, which are probably better not entered into. Thus, if an epistemological theory is supposed to be valuable, it must play some role in improving the way people reason in general. Often we mistakenly conclude that we know things when we in fact don’t. And such mistakes rarely exist in a vacuum; there is a disturbing tendency for unjustified beliefs to be passed from person to person as knowledge. In this context it does seem that there is work for an epistemological theory to do: if we could inform people about how knowledge really works, they would make fewer such mistakes.

Of course this assumes that we are capable of conveying our epistemological results to the public at large and that they are capable of actually applying such theories. But let us just grant that both are possible, not simply for reasons of charity, but because I think they really are possible, assuming that people are introduced to such theories early in life, and not later as positions in academic philosophy. I guess then that we are assuming that these people will apply those epistemological theories to their irrational beliefs and thus discard them. But why would they apply that theory to those beliefs specifically? In thinking of them as true they are on the same level with every other belief held as true, including the belief that the Earth is round and that the sun will rise tomorrow. Surely we don’t expect them to turn that theory on every belief that they possess. Checking the reliability of a belief will certainly take some time, and our lives are filled with beliefs thought to be true. Just by moving around an environment we develop a number of beliefs about the placement of objects, and that objects exist where we cannot currently see them, and so on. Validating each of these beliefs as they occur to us would make life basically unlivable. Well, maybe if they have mastered the epistemological theory at an early enough age they can learn to train how they form beliefs, such that they only form beliefs that count as knowledge. And thus they will have no need to double-check every belief they form about their environment, because they will have already learned that all such beliefs formed in this way are trustworthy in the absence of any contradictory evidence. This seems more plausible, but there are still problems. Because, as I mentioned earlier, many beliefs falsely taken to be knowledge are the result of being told by someone else that they are the case.
This would seem to imply that, in order not to endorse irrational beliefs as knowledge, a person would have to check every fact that was told to them by some other source. But many facts come to us from such sources. Indeed I couldn’t even imagine learning any complex subject without placing some untested trust in others, simply because not every part of a large body of work can be independently confirmed by us; at best we can spot-check it. And such restraint against believing the claims of other people could also lead to a kind of practical paralysis, where they would be unable to function even in situations of ordinary communication (“your shoes are by the door”) because they would be constantly checking those facts against background information.

It would appear then that for an epistemological theory to be useful people would have to apply it just to their irrational beliefs. And obviously that isn't possible, because if we knew which of our beliefs were irrational we wouldn't take them seriously, and thus there would be no need for an epistemological theory in the first place. Naturally this problem isn't restricted to trying to apply epistemological theories but extends to belief revision in general. Given that we may have some false beliefs that we take to be true, how should we hunt them down and revise them? Descartes' radical solution was to try to toss out all of our beliefs and then start over from scratch. Obviously such a project could never work. Not only does Descartes surreptitiously sneak in a number of principles that might themselves seem in need of doubting in order to get anywhere, but eventually he falls back on the idea that god is a nice person who guarantees that anything we feel sufficiently certain of is true. Thus it might seem as if we would just have to accept the fact that some false beliefs may hide among our true beliefs and that we would not be able to free ourselves from them.

At least that would be the situation we would find ourselves in if we restricted ourselves to trying to identify our false beliefs by looking for signs of their falsehood directly. However, we might be able to productively hunt them down by finding some other characteristic of theirs that usually accompanies them. Let's consider then the nature of persistent false beliefs. If a belief is false we will occasionally come across evidence that points to it being false; that is entailed by the nature of a false belief: it asserts something about the world which is not the case, and given that we live in the world there is the possibility that we will come across something that contradicts it. If a false belief persists in light of such evidence it must be because, regarding it, our natural ability to reason and assess evidence, which we all possess, is somehow being prevented from working properly. Fortunately we don't have to grope around in the dark for something that can fulfill that role; psychologists have already discovered that an emotional involvement with a belief, either a desire for it to be true or a fear that it might be the case, prevents proper evaluation of the evidence. Thus if we want to find our false beliefs we can restrict our search to those that we have strong feelings about. Although we might not have conscious access to when we are being irrational, surely we do have conscious access to our desires. And then we could apply our epistemological theory to this restricted subset and determine which of those beliefs really are irrational (because the fact that we have strong feelings about the content of those beliefs doesn't necessarily mean that we are in error all the time). Since this is probably a relatively small number of beliefs (compared to the total number of our beliefs) applying the theory to each of them doesn't seem an impossible task.

And so we are back to our earlier conclusions about the value of epistemology, namely that it can be useful to us in improving our ability to reason in ordinary circumstances, as long as it is accompanied by a heuristic we can use to guide its application. Although we are normally relatively rational, in a basically unconscious way, that capability can be interfered with in certain situations, and there we must lean on an explicit theory of knowledge, since obviously we can’t trust our normal reasoning strategies given that interference, and since they are unconscious we can’t just reflect on them to determine what has gone wrong with them.

December 8, 2007


Filed under: Epistemology — Peter @ 12:00 am

Obviously solving some problems takes longer than others; for example, you can't put a list of 10 items in order as fast as you can put a list of 2 items in order, simply because you have to look at more items in the first case. But problems can take different lengths of time for reasons besides the fact that they are dealing with different-sized inputs. Some problems might be solvable within a time strictly proportional to the size of the input, while others might take time proportional to the square or the cube of the size of the input. This kind of difference is called a difference in the computational complexity of the problem. Computational complexity covers a great deal of territory, but one famous unsolved problem within it is whether a certain class of hard-to-solve problems, designated NP, can be reduced to faster-to-solve problems, designated P. NP problems are characterized by the fact that the fastest known solutions to them take time proportional to an exponential function, say 2ⁿ. Often this comes about because there are an exponential number of possible solutions, and, while a solution can be checked for correctness relatively quickly, there is no easy way to narrow in on the correct solution. Thus even the best algorithms for solving the problem essentially reduce to trying one solution after another until a correct answer is found or all the possibilities are exhausted. If we could reduce NP problems to P problems it would entail that we have a way to solve these problems in polynomial time (time proportional to some polynomial function of the input), meaning that we aren't going through the possible solutions one after another in sequence.
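The asymmetry between finding and checking can be made concrete with subset sum, a standard NP-complete problem. In this Python sketch (the function names are my own), verifying a proposed answer takes time proportional to its length, while the brute-force search may have to examine all 2ⁿ subsets:

```python
from itertools import combinations

def verify(nums, target, candidate):
    """Checking a proposed certificate is fast: sum it and compare."""
    return sum(candidate) == target and all(x in nums for x in candidate)

def brute_force(nums, target):
    """Finding a certificate may require trying all 2^n subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None  # exhausted every possibility

nums = [3, 9, 8, 4, 5, 7]
solution = brute_force(nums, 15)
assert solution is not None and verify(nums, 15, solution)
```

The search loop is exactly the "trying one solution after another" the paragraph describes: no known shortcut avoids it in general, which is what the P=NP question asks about.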

But why should we care whether P=NP? As I was working on this piece I realized that I had gotten caught up in the problem, without really paying any attention to the significance of the problem itself. One reason we might care is that there are places where we rely on the fact that certain problems are hard to solve. For example, there are many NP problems where we can quickly generate a solution and then, on the basis of that solution, produce a hard problem that it is the unique answer to. We could then turn that problem into a lock and the answer into the key, and on that basis have confidence in the security of our lock, because we know that coming up with that answer from scratch is NP, and thus hard enough to make breaking in infeasible. I have also heard that designing a protein to perform a specific function is NP because of certain difficulties in determining how proteins fold, although I don't know enough about that particular field to say for sure. And NP problems pop up every so often in games and puzzles, with the fact that they are NP forcing players to use strategy, and preventing them from simply brute-forcing their way to victory. But, while I can give examples of problems that are NP, it is harder to instill the conviction that proving P=NP, or the reverse, P≠NP, matters, except when it comes to encryption, which is the practical implementation of my key-lock example. While the problem seems interesting from an academic point of view there seem to be few immediate consequences. Sure, if P=NP then many modern encryption schemes would be breakable, but that doesn't mean we couldn't replace them with something better. We could, for example, use quantum entanglement to one-up encryption by preventing eavesdropping on our communications completely (because, given that eavesdroppers would collapse the wave-function, we could in principle detect when someone else was listening in).
If that makes encryption impractical for the common uses we put it to, such as protecting our transactions on the internet, we might simply come up with other schemes to safeguard ourselves, such as personally paying on receipt of goods. On the other hand, if P≠NP then some encryption schemes may be safe, but little else of interest follows. Which means that the question is a purely academic one; while mathematicians may have great fun writing papers about it, it doesn't matter in any practical sense. But that is true for many mathematical questions, so perhaps it shouldn't surprise us. In any case my goal here isn't really to say anything significant about whether P=NP; that was just a misleading title to grab readers.
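The key-lock idea can be illustrated with a toy Python sketch. To be clear, this is not a real cryptosystem (subset-sum-based schemes have historically been broken, and all names here are illustrative); it only shows the shape of the idea: generating a secret answer is cheap, and publishing a problem to which it is the answer hands attackers a subset-sum instance.

```python
import random

def make_lock_and_key(n=20):
    """Toy illustration: the 'key' is a secret subset, the 'lock' is
    the full list plus the subset's sum. Recovering the key from the
    lock alone means solving subset sum from scratch."""
    nums = [random.randrange(1, 10**6) for _ in range(n)]
    key = [x for x in nums if random.random() < 0.5]  # cheap to generate
    lock = (nums, sum(key))                           # safe to publish
    return lock, key

lock, key = make_lock_and_key()
nums, target = lock
assert sum(key) == target  # the key opens the lock instantly
```

The asymmetry is the whole point: one direction (key to lock) is a single pass over the list, while the other direction (lock to key) is, as far as anyone knows, exponentially hard in general.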

Anyways, let us return for a moment to whether P=NP. If we can provide a solution to some generic NP problem that comes to a solution in polynomial time then I will take it that we will have shown P=NP. One flippant way to demonstrate that is simply to gesture at Moore's law, which states that computing power doubles roughly every two years. If Moore's law really were a law then obviously the speed at which we can compute would rise exponentially, which means that even an NP problem that we attempted to solve just by checking each answer would finish in polynomial time, simply because increasing computer speed would counteract our initial expectations about how long it would take. However, this isn't really a reduction of NP to P, for two reasons. First, there are limits to how fast computers can get, and so Moore's law can't continue indefinitely. And, secondly, P algorithms would also experience an exponential speed-up, and so we might still wonder if we could make NP problems as fast as the P ones. A serious possibility regarding how NP problems might be reduced to P problems involves exploiting the quantum nature of the universe to check possible solutions simultaneously. In practical terms it might work as follows: the computer would start by dividing the possible solutions into two equal-sized groups (not by enumerating them of course, but by making a decision about how to generate the possible solutions, such that we would only generate one half or the other). The computer would then consult a random quantum event that had an equal chance of resulting in either of two outcomes. Depending on the result the computer would ignore one half of the solutions, and would proceed to repeat that process until it was considering only a single possible solution, which it would then check to see whether it was valid.
This might seem like it would result in the computer checking only a single possible solution determined at random, but, according to quantum physics, when the computer consults the random quantum event to decide which half of the solutions to discard it actually enters a superposition of both possibilities, where in one part of the superposition it has discarded the first half and in the other it has discarded the second half. And as more and more quantum events are consulted the superposition becomes more and more complicated, until at the end the superposition is made up of the computer checking every possible solution. Because we reach this state by dividing the possible solutions over and over again this means that even if the number of solutions is exponential we will develop a complicated superposition which checks all of them in only polynomial time, thus reducing NP to P. But, while this sounds great, there is a small problem. While we can make the computer enter this complicated superposition it isn’t obvious how to get the answer out. In fact it is an open question whether getting the answer out is even possible. But, on the other hand, the answer is there in the superposition, whether we can see it or not. I’ll leave whether this counts as a reduction of NP to P up to the reader.
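Classically, the halving procedure really does amount to checking a single randomly chosen candidate, which is what makes the quantum version interesting. This Python sketch (with hypothetical names) simulates one classical run: log₂(N) coin flips narrow a range of N candidates down to one, and only that one is ever checked.

```python
import random

def halving_run(num_candidates, is_valid):
    """One classical run of the halving scheme: each coin flip keeps
    one half of the remaining candidates, ending at a single index."""
    lo, hi = 0, num_candidates        # half-open range [lo, hi)
    steps = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if random.random() < 0.5:     # the 'random quantum event'
            hi = mid                  # discard the upper half
        else:
            lo = mid                  # discard the lower half
        steps += 1
    return lo, steps, is_valid(lo)

# 1024 candidates: only 10 halvings, but only one candidate gets checked.
candidate, steps, found = halving_run(1024, lambda i: i == 777)
assert steps == 10
```

A quantum computer, on the description above, would follow both branches of every flip in superposition; the classical run shows why, without that, the scheme is no better than guessing once.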

So, while the quantum solution was close to what we wanted, it wasn't quite there, because we couldn't get the answer. So allow me to detail a second way to solve NP problems that does yield an answer in polynomial time, this time exploiting biological facts rather than quantum ones. This time our computer will be an artificially constructed bacterium with very complicated DNA. Every time the bacterium divides it also divides the problem space (we assume individual bacteria contain structures that record this information), just as our computer above did when consulting its quantum event. And, when there is only one solution left, if that solution doesn't work the bacterium dies; otherwise it keeps dividing, but this time making perfect copies of itself. Leave a properly designed bacterium with enough food and space and it will solve an NP problem in polynomial time. This is because bacteria divide at a relatively constant rate, and thus, as with our quantum computer, manage to cover all of the possibilities because the number of bacteria increases exponentially, just as the complexity of the superposition increased exponentially, with each bacterium dealing with different possible solutions. And, unlike the quantum computer, we will be able to extract our solution because the wrong answers will all die, while the correct answers will continue to multiply. And thus, after a sufficient length of time, if we reach in and extract a bacterium it will encode a correct answer to the problem. (And, obviously, if there are no bacteria, that means that there were no correct answers.)
Obviously the space and energy we need increase exponentially with the size of the problem, although we are free to trade space for time (we can make the bacteria take longer to come to a solution in order to have them take up less space), and we might be able to reduce the space requirement significantly in other ways as well, such as with heuristics that allow us to kill off entire "families" of bacterial solutions early. But these things are beside the point when it comes to computational complexity; all that matters is how many steps it takes to get the answer from scratch, and it is obvious that our specially designed bacteria reach the solution in polynomial time. And thus for bacteria computers P=NP.
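The bacterial scheme can be simulated in miniature in Python (an idealized sketch with illustrative names): each generation the colony doubles, every individual committing to one more bit of a candidate solution, so after n generations the colony covers all 2ⁿ candidates; those holding wrong answers then die off.

```python
def bacterial_search(n_bits, is_valid):
    """Idealized bacterial computer: each generation every 'bacterium'
    divides, splitting the remaining solution space in two. After
    n_bits generations each holds a complete candidate; individuals
    holding invalid candidates die."""
    colony = [()]                        # a single ancestor
    for _ in range(n_bits):              # n generations: 'polynomial time'
        colony = [c + (bit,) for c in colony for bit in (0, 1)]
    # ...but the colony itself has grown to 2**n_bits individuals.
    return [c for c in colony if is_valid(c)]

# Find all 4-bit strings with exactly three 1-bits.
survivors = bacterial_search(4, lambda bits: sum(bits) == 3)
assert len(survivors) == 4               # C(4,3) = 4 valid candidates
```

The simulation also makes the post's closing point visible: the generation count grows linearly, but the colony (the space and energy) grows exponentially, which is exactly the cost an abstract step-counting model of complexity ignores.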

Obviously these bacterial computers are extremely impractical, but, as with everything else in computational complexity, we consider only ideal situations, and in an ideal situation we can assume that the bacteria don’t make copying errors and have unlimited food and space. What the example of the bacteria computer and the quantum computer really show is not that P=NP, but that how complex solving a problem is depends on the primitive operations we have available to us. And when it comes to real-life computation the primitive operations are determined by the physical nature of the universe, because we can easily imagine universes in which we could extract the answer from our quantum computer, or where our bacteria computer is practical (for example, one in which matter is continuous and the bacteria also halve their size every generation). What this indicates is simply that it is somewhat foolish to try to axiomatically model computational complexity. Sure we can accurately model computational complexity for fixed kinds of machines using an axiom system, but we are simply incapable of determining a priori what the primitive computational operations are. Which just gives one more example of a fact that I established previously, that when it comes to questions regarding reality, such as “can we compute the answer to NP problems in polynomial time?”, answers to them ultimately rest on assumptions about the world which must be supported by observation and experimentation; pure mathematics can tell us nothing by itself.

December 2, 2007

When Models Collide

Filed under: Epistemology — Peter @ 12:00 am

Recently I have been thinking about knowledge in general, with a focus on the models that are a key part of the way we understand the world. A model can be thought of as a logical construct, containing objects and properties structured by certain laws, which we put in a correspondence with the world in an attempt to explain it. I won't go as far as to say that every theory is just some model or other. However, I will assert that every theory contains some model, because it is only through a model of some kind that we can conceptualize the world and see patterns in it. But these models are not equally important in every theory. Some theories, such as many scientific ones, aim primarily to understand the structure of the world, and obviously for them the model is the primary focus, because all that matters for that purpose is to make the model as perfect as possible. But other theories aim more to offer practical advice, and although some model is required to do so its correctness is somewhat tangential to the central issues. As long as the model is roughly correct that is sufficient, because perfect precision simply isn't needed in these cases. I wouldn't venture to guess which of these two kinds of theories is more important; however, people usually place more emphasis on the first, probably because understanding the structure of reality seems like a more noble goal.

And when it comes to the first kind of theories there is a sharp divide between two general ways of constructing those models. On one hand we have the mechanical models. Mechanical models attempt to model the world using only a few simple kinds of objects and laws. So described it seems like an impossible task; given the amount of complexity we encounter it is hard to imagine that such a simple model could explain it all. And yet mechanical models have done surprisingly well at doing just that. On the other hand we have intuitive models. Instead of trying to model the world using an essentially arbitrary set of simple objects, intuitive models attempt to divide up the world along the lines we are naturally inclined to draw, and to explain it with that set of objects and properties. This might seem like a better approach at first glance, because it is natural to suppose that the way we conceptualize the world is generally well-suited to describing it. And yet intuitive models have never met with much success when it comes to actually making predictions beyond the limited domains they were developed from. An example of such an intuitive model would be one that attempted to capture language by supposing that there exist objects corresponding to different meanings, and that we somehow connect to these objects when using language, thus explaining how we can communicate in a relatively unambiguous way. Obviously this model runs into problems when it comes to making specific claims; while we could interpret many cases of language use in terms of this model it is hard to make the model say anything on its own.

If we propose two radically different ways of modeling the world it seems unavoidable, to me, that they will come into conflict with each other; they both aim to explain the entire world, and will thus end up voicing different views on the same phenomena. Of course if we had the right frame of mind there is nothing that stops us from accepting both models; nothing in either model will explicitly reject the other, and I guess it isn't impossible for someone to believe in two different explanations for the same thing. But we aren't devising these models simply out of curiosity; we want to know how the world works. And thus we want to know which model is best, which means we want to know which one lines up best with reality. Of course it is still possible that these two kinds of models could manage to tie: they could make the same predictions, or they might both run into irresolvable problems that only the other can solve. But that's probably not the situation we will find ourselves in; it is much more likely that one of these ways of modeling the world will turn out to be better than the other. And we will also find ourselves in the position of having to pick one or the other when it comes to making bold claims, claims about what something really is. Recently this kind of situation has arisen with respect to the nature of the mind, with the mechanical and intuitive models both making claims about what the mind really is. Clearly the mind can't be two different things, and so we must pick one model as superior to the other.

Obviously when we are faced with two different models we should always pick the stronger model, which makes more precise predictions, and the more accurate one, which turns out to be correct more often in its predictions. At first glance it seems that the mechanical models obviously have the upper hand on intuitive models when it comes to both these criteria. Modern mechanical models claim to capture literally everything, and they make very precise numerical predictions about those things that usually turn out to be correct. But, it may be objected, when it comes to even moderately complex phenomena it stops being practically possible to carry out the calculations the mechanical model would require of us. Thus intuitive models may actually be superior when it comes to large-scale phenomena, because they actually make claims there. However, we have strong theoretical reasons to believe that we could, in principle, carry out the calculations required of us for large-scale phenomena; it's just that we lack the practical resources to do so. On the other hand it isn't practical reasons that keep the intuitive models from making stronger or more accurate predictions, but rather their inherent limitations. Thus we have reason to believe that the mechanical models are superior to the intuitive models, in principle, even if, on practical terms, it may be wise to stick to the intuitive models in many situations.

Is it possible to save our intuitive models? In theory it might be possible to tighten them up, both making them stronger and more accurate while still retaining their intuitive character. Admittedly I don’t see how this can be done, but my lack of imagination doesn’t make it impossible. If it could be done it would be essentially a reverse of what happened when mechanical models were first invented, which were initially the ones that were weaker and less precise, but which were eventually improved to be superior. Another way to save our intuitive models would be to give up on the idea that ultimately they reflect the world. Instead we can understand them as rough approximations of it, with the objects and properties in our intuitive models standing for something in the mechanical model that really captures the world. To save our intuitive models in this way means that they cease to be able to answer our questions about the structure of the world. But, on the up side, by making these adjustments we can continue to use them without foolishly acting as if they were the superior models. And thus it is the alternative I prefer, because failing to embrace one of these options makes any use of the intuitive models simply absurd, either because it would essentially express a contradiction within what we believe or would represent an intentional blindness to the facts.

Naturally such a discussion raises some questions about philosophy. Does philosophy make mistakes by sticking to intuitive models when the mechanical models are obviously superior? The answer depends on what philosophy we are considering. Certainly some philosophy makes this mistake, including a lot of metaphysics. But, with the right intent, even metaphysics can be done without running into these problems. As mentioned above, all we have to do is accept that in some way our model must really be an oblique way of talking about some mechanical model. Obviously this frustrates projects that set out explicitly to say what the nature of the world really is. But a great deal of metaphysics can be interpreted productively as a search for practical and mostly accurate ways of conceptualizing phenomena so as to avoid certain errors we tend to make while thinking about them. And outside of this most philosophy can be seen as focused more on giving advice than on the model itself, making it the second kind of theory mentioned in the beginning, which is relatively independent of models. Thus philosophy is, for the most part, safe from the problem of illegitimately embracing intuitive models in any meaningful way, whatever other problems it may actually suffer from.
