On Philosophy

April 6, 2009

A Reasonable Form of Dualism

Filed under: Mind,Ontology — Peter @ 12:15 pm

I have never been fond of dualism. I have confidence in the ability of science to explain the world, and so when I was first exposed to the mind-body problem it seemed plausible to me that science could explain consciousness. And this means that in some way consciousness and the mind must reduce to or be explainable in terms of the physical; in other words, that materialism is essentially correct and dualism essentially mistaken. From that perspective dualism seems unscientific or anti-scientific; it takes one of the phenomena we find in the world and says that it is off limits to science, that science will never be able to explain it. But does dualism have to take that form? Can’t we separate the ideas in dualism that make it attractive from this anti-scientific position? My goals in this paper are twofold: first, to explain why I see the common arguments for dualism, and thus the common forms of dualism, as lacking; and second, to describe a form of dualism that manages to avoid those problems.

A: Arguments From Ignorance

The major arguments for dualism are, at their core, arguments from ignorance. An argument from ignorance is one that proceeds from the fact that we don’t know how to do or explain something to the conclusion that it can’t be done or can’t be explained. This is what gives many forms of dualism their anti-scientific flavor; from the fact that science hasn’t yet explained the mind and the mind-body connection it is concluded that science can’t explain the mind and the mind-body connection. Such arguments have no merit; our ignorance reveals nothing about the world, only our lack of knowledge about it. To reasonably argue that something can’t be done or can’t be explained by science requires some understanding of the things involved, and a demonstration that this understanding rules out the proposal. This is, of course, not how dualism is argued for. The dualist does not come to the table with a fully developed and well supported theory of the mind and the mind-body connection which precludes a scientific theory.

Consider, for starters, the argument for dualism from the existence of the explanatory gap. The explanatory gap, in brief, is our current inability to explain the phenomenal character of consciousness – qualia, as some call it – in non-mental terms. And from this gap in our knowledge some conclude that there must be something non-physical involved that such explanations simply can’t capture. This is obviously a fallacious argument. The fact that we don’t yet know how to capture the mental in physical terms doesn’t say anything about whether the mental can – or can’t – be explained in such terms. There are many things we can’t yet explain, such as how quantum mechanics fits with general relativity (another explanatory gap that has been with us for quite some time). It would be absurd to leap to the conclusion that we can’t explain something every time we encounter difficulty in doing so.

Arguments for dualism are rarely put in that form; to accuse most dualists of using the argument above as I have described it would be uncharitable. But certain arguments that have made it into print are really just disguised versions of it. There is a class of arguments for dualism that attempt to refute the possibility of an explanation of the mind in physical terms by asking us to imagine such an explanation at work. Imagine someone without the ability to see colors, or without the ability to sense objects through sonar. No matter how much they study the mind of someone with such sensations, we are told, they will never know what it is like to have those sensations. Thus we are asked to conclude that such explanations can never in principle capture the phenomenal character of consciousness. But how do we know that they won’t end up knowing what those qualia are like through such an explanation? Obviously we can’t know that, because we don’t yet know how to explain consciousness in physical terms. Since we don’t know what such an explanation would look like we can’t know what knowledge it will or will not give us. Thus the argument is asking us to conclude, on the basis of our inability to imagine how a scientific explanation of the mind could give us knowledge of what various sensations are like, that it can’t possibly provide such knowledge. In other words, it is an argument from ignorance.

Dualism is also argued for on occasion by claiming that consciousness has some special property, such as subjectivity, phenomenal character, or a first person ontology, that simply can’t be explained in terms of objective, non-phenomenal things with a third person ontology. On the face of it this doesn’t look like an argument from ignorance, because it seems to be asserting that there is some logical incompatibility between two kinds of properties that prevents either from explaining the other. But what reason do we have to believe that such explanations are impossible? Certainly we don’t know how to explain one in terms of the other. Nor have we ever seen such an explanation. But – unless better reasons can be provided – this means that at its root the claim that these two sorts of properties are incompatible rests on an argument from ignorance, ignorance of how properties of one sort might be explained in terms of the other. And thus, again, the argument as a whole is nothing more than a disguised argument from ignorance.

A third popular argument to consider is the conceivability argument. In its simplest form the argument runs as follows: we can conceive of the mind as distinct from the body, thus it is possible for the mind to be distinct from the body, and thus the mind is not identical to the body. So materialism, which claims that the mind is in some way identical to the body, must be false, and dualism true. But why can we conceive of the mind as distinct from the body? Indeed, what in general limits how we conceive of things? One limiting factor, among many, is how we understand them, which in turn involves how we explain them. Allow me to illustrate with gravity as an example. In modern times the phenomenon of gravity is reduced to the curvature of space-time. Thus, if the argument for dualism presented above makes sense, we must not be able to conceive of gravity as distinct from curvature in space-time. But of course not everyone is so conceptually bound; someone who lived before Einstein might have conceived of gravity as caused by tiny and invisible springs connecting things. They can conceive of gravity as distinct from curved space-time. If we can’t, it must be because our explanation of gravity in terms of curved space-time puts limits on what we can conceive. But this means that the non-existence of an explanation of the mind in physical terms is a hidden premise in the argument (one that underlies the claim that we can conceive of the mind as distinct from the body, along with whatever other factors limit conceivability). So either the argument begs the question or, more charitably, it essentially rests on an argument from ignorance.

Such arguments for dualism make it look like a very unappealing theory, at least in my eyes. They make dualism look like a theory that takes intuitions and superstitions more seriously than scientific inquiry, such that they can set the limits of what science can and cannot explain. They make dualism look like a theory cast from the same mold as vitalism, inasmuch as vitalism claimed that there was something special and irreducible about life that could never be explained in merely chemical terms. I don’t think that this has to be true of dualism; dualism does not have to be an anti-scientific philosophical position, and by casting it in such a light the arguments from ignorance discussed above do much more harm to the theory than good.

B: Ontological Dualism

So what then might a reasonable argument for dualism, and a reasonable form of dualism, look like? The first step towards such an argument is to stop playing the materialists’ game. The materialists cast the question as being about how consciousness can be explained. They argue that it can be explained physically, and thus scientifically. Which means that if the dualist agrees to fight them on their own terms he or she will fall into a position that entails that consciousness is something outside the ability of science to explain (at least unless some new basic mental entities or properties are added to science).

The dualist can and should deny this characterization of the question and of the difference between materialism and dualism. The materialist, so described, is not even doing philosophy proper. It is not the job of philosophy to explain how things work in terms of simpler things; that is a scientific problem (or at least it hasn’t been philosophy’s job ever since science was split off into its own discipline). What the materialist has been doing is no more than asserting that a scientific problem can be solved scientifically. But the task of philosophy is to say what things are; an explanation in philosophy is one that tries to explain the nature of things, not how they work. The mind-body problem, as a philosophical problem, is an ontological one – one that deals with how we categorize the world – which is orthogonal to whether consciousness can be explained in terms of or reduced to purely physical entities and properties.

Ontologically we are interested in what kinds of things there are in the world. Now we could construct a scientific ontology, where we divide the world along the lines of scientific explanations. But nothing forces us to adopt such an ontology – it is just one possibility. With an ontology we are trying to capture significant differences and similarities between things; even if one object in our ontology reduces to or can be explained in terms of some other items in it we aren’t forced to place them in the same category. A computer, for example, is nothing but silicon and electrons at the physical level. However, computers are of great interest to us. There are a number of properties that are peculiar to computers, such as the ability to run certain pieces of software, and often computers as a class are pertinent in ways that silicon and electrons in general are not. Thus it could be argued that it makes sense to treat computers as an ontologically different kind of thing than silicon and electrons in some contexts, despite the fact that there is nothing in a computer over and above silicon and electrons, and even though every property that the computer has can be shown to ultimately arise from properties that the silicon and electrons have.

For essentially the same kind of reasons it makes sense to treat the mind as a different kind of thing than neurons and amino acids, even if we admit that in some way every mental property can ultimately be shown to arise from (and thus reduce to) properties of the neurons and amino acids. Only in consciousness do we find genuine intentionality. Only in consciousness do we find a genuine perspective to which the world is presented. Only a consciousness can impose meaning onto the world. If we are interested in such things, and many philosophers certainly are, then it makes sense to treat minds as their own kind of thing. Yes, perhaps we could discuss intentionality one day by referring to some complicated neural structure. Doing so, however, would only serve to obscure the issue. It is intentionality that is interesting philosophically, not the particular neural structure that may or may not underlie it (although that is surely interesting to cognitive scientists). A change in the neural explanation of intentionality should have no consequences for a philosophical theory involving intentionality (which it would if we tried to replace any use of intentionality with such a neural explanation). I call a form of dualism that takes the ontological nature of the problem seriously, and which argues that there is a significant ontological distinction between the mental and the physical, ontological dualism. Ontological dualism is not forced to rest on arguments from ignorance, because ontological dualism is not an attempt to deny the possibility of certain explanations. Rather it aims to demonstrate something positive, namely that there is a philosophically significant difference between mind and body.

Now a materialist may respond to this proposal by claiming that I am merely playing a game with words. If an ontology doesn’t bring with it entailments about how things are to be explained or about what properties are fundamental (in the sense that others can be reduced to them, but they themselves cannot be reduced) then what good is it? What does it matter if I divide the world up into categories if the categories don’t bear on such questions? By asking this the materialist would miss the point. By dividing the world up into categories we make a number of claims; they are just not of the sort the materialist is used to. By proposing an ontology we are making a claim about what the most philosophically significant and interesting divisions between things are. If we place minds in one category and mindless physical objects in another we are asserting that the distinctive properties of the minded category are substantially different from those in the mindless category and are of philosophical importance. This is why we would reject chairs and non-chairs as a division at the top level of an ontology. The difference between a chair and some non-chairs is not very substantial, and, more importantly, whether something is a chair or not is of little philosophical importance. No philosopher has ever given the property of being a chair an important role in their theories, but many have made the property of having a mind, or something only minds do, central. To call this enterprise merely semantic is thus to deny that how we categorize the world is of any importance. But it is of great importance. How we categorize the world shapes what “kinds” of phenomena we are interested in explaining. And how we categorize the world shapes what “kinds” of phenomena we develop our philosophical theories in terms of. Thus our choice of ontology shapes everything else we do in philosophy, both what we investigate, and what we find acceptable as results of such an investigation.

C: Ontological Materialism

Given my presentation of this new variety of dualism I may appear to be claiming that this version of dualism is correct and that materialism is wrongheaded. I do admit to accusing materialists of confusing philosophical issues concerning the connection between mind and body with scientific ones. But most dualists are no better; this is why dualism often ends up seeming anti-scientific. So while I would certainly agree that this version of dualism is more philosophically attractive than existing varieties of materialism, there is nothing in principle preventing us from repairing materialism in the same way, separating it too from scientific questions of reduction. My proposal is not an attempt to end the debate between dualism and materialism by proving one side or disproving the other. My proposal is rather that we shift the content of the debate so that neither side is entangled with scientific commitments.

Materialism can also be construed as a purely ontological theory, as one that proposes that there be no categorical divisions between mental and physical properties or between beings with and without minds. Just as ontological dualism implies that the division between the mental and the physical is philosophically significant, ontological materialism asserts that those divisions should play no significant role in philosophical theories. This means, for example, that ontological materialism is incompatible with an ethical theory that limits ethical agency to entities with minds. Such a theory is committed to a substantial divide between the mental and the physical, inasmuch as ethics is only relevant to one of the two. To make this theory acceptable to ontological materialism we would have to characterize the requirements for ethical agency without appealing to minds or mental features. For example, the ontological materialist could make agency contingent on the ability to communicate and reason about ethical concepts. This might still sound like it has a mental flavor, but such a requirement can be understood as behavioral, as being really about how the entity interacts with others, and not about any consciousness, intentionality, experiences, or occurrent beliefs it may or may not have.

The debate between ontological materialism and ontological dualism is not easily settled. Is the distinction between mental and non-mental a fundamental and significant part of philosophical theories, or can it be profitably dispensed with (possibly replaced with concepts such as the cognitive capacity to learn, interactions between agents, and linguistic behavior, all of which can be construed as independent from the mental)? Any attempt to definitively answer this question would involve examining theories that lean on a division between the mental and the non-mental and seeing whether that division is an essential and irreplaceable part of the theory. That examination in itself could be of great philosophical worth. To return to ethics again: considering whether having a mind plays an important and indispensable role in agent-hood, or whether it is just an easy way of ruling out rocks and trees, could provide new insights into ethical questions. A cursory inspection, though, makes ontological dualism appear to be the superior theory. Such a large number of philosophical positions appeal to minds or mental features that it is hard to see how the ontological distinction between mind and body could be removed from philosophy as a whole. Certainly ontological materialists have their work cut out for them.

December 22, 2007


Filed under: Mind — Peter @ 12:00 am

The topic of what makes something a fake is one that appears with some frequency nowadays in connection with discussions concerning whether a mind implemented with silicon and electrons is somehow a fake version of one implemented with lipids and ions. But before we can turn to such complicated questions I think we need to turn our attention first to what makes a fake in simpler cases, which is not always as easy to say as it may first appear.

The simplest cases involve things such as fake rocks, which may actually be made of Styrofoam. In such a case we compare the fake to a genuine item and point out that there are discoverable differences between them in their properties. In the case of the fake rock we point out that while it may look the same as a real rock, it is not actually composed of the same stuff, and thus falls short of the real thing. However there are plenty of cases where we can’t define what constitutes being fake in that obvious way. Consider, for example, what makes a painting a fake Picasso. It’s not that Picasso made his paintings out of different materials than the fake, because while it is a fake Picasso it is a perfectly genuine painting. What makes the fake a fake in this case is not something discoverable by inspection; it is its origin. Still, though, we draw the distinction between a genuine item and a fake based on some properties, even though those properties may not be something we can point to directly, as they may be extrinsic to the item itself.

But even that conception of what counts as a fake is not sufficient to cover all cases. It works for some limited cases where the genuine items all have some simple set of properties in common, but it is perfectly possible to consider broader kinds of things that are not so easily definable. Consider, for example, complicated tools such as pianos or calculators. What makes a piano a piano or a calculator a calculator has very little to do with its construction, because we can imagine making them in a wide variety of ways. Rather what makes a fake different from a real item in these cases has something to do with how it functions. For example, a fake piano might be something that simply looked like a piano but didn’t play music, or if it did play music it did so by playing predetermined recordings of songs rather than actually responding to how it was used. Similarly we can imagine a fake calculator that at first appears like a normal calculator, but which only has a few sums built in by pre-computation. Again we might say that certain properties distinguish the real items from the fakes, in this case functional properties, but such properties are significantly different from the simpler properties of composition or origin considered above.
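The fake calculator case can be made concrete with a toy sketch. The class names here are hypothetical illustrations, not anything from the original text: the point is simply that the fake is indistinguishable from the real thing until it is pushed outside its precomputed table.

```python
# Toy illustration of the functional notion of fakery described above.
# All names are hypothetical; this is a sketch, not a serious model.

class RealCalculator:
    def add(self, a, b):
        return a + b  # genuinely computes the result

class FakeCalculator:
    def __init__(self):
        # Only a few sums are built in by pre-computation.
        self.table = {(1, 1): 2, (2, 2): 4, (3, 4): 7}

    def add(self, a, b):
        try:
            return self.table[(a, b)]
        except KeyError:
            raise RuntimeError("the fake breaks down outside its table")

real, fake = RealCalculator(), FakeCalculator()
print(real.add(2, 2), fake.add(2, 2))  # 4 4 -- indistinguishable on this input
print(real.add(5, 7))                  # 12
# fake.add(5, 7) would raise: the missing functional property is revealed
```

The functional property that makes a calculator genuine is thus not visible in any single interaction, only in the open-ended range of interactions it can support.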

So far so good, but there are still more problematic cases to consider. What about, for example, fake emotions? We know that there are such things as fake emotions; someone may act as though they are feeling something while actually being devoid of such feelings. However, they may display the same behavior as someone who is really feeling emotions. Obviously the cases that we know about are ones where the faker slipped up and betrayed the fact that their emotion was fake, but we can easily imagine cases with no such slip-ups, and thus which can’t be distinguished from the real cases in that way. What we want to say is that the person with fake emotions is just acting as if they were really feeling something, while the person with real emotions actually has those feelings. But in making such a statement we inadvertently appeal to the distinction between having a real feeling and not having one. That simply moves the burden of distinguishing real emotions from fake ones to the task of distinguishing real feelings from fake ones, which isn’t any easier.

Let’s consider in more detail exactly how fake emotion works in order to shine some more light on the problem. To fake an emotion the person doing the faking must obviously have some idea of how people undergoing the real emotion behave; they must have a mental model of that behavior. Thus the person faking that emotion must do something like the following: first they decide, as a result of inner processing, that they want to appear as if they had some emotion. Then they consult their mental model of that emotion to decide what someone with that emotion would do in their place. The results are fed back into their inner processing, which leads to a decision either to act in that way or to abandon the pretense if acting in that way is too inconvenient. In contrast consider how someone experiencing a real emotion decides how to act: their inner processing simply leads to a decision, the emotion existing as an alteration in how their inner processing usually works. Schematically then the two are as follows:
Fake: inner processing → mental emotion model → inner processing → behavior
Real: inner processing → behavior
The problem then comes in distinguishing the first from the second, because the chain consisting of inner processing → mental emotion model → inner processing is simply a complicated case of our normal mental operations, and the inner processing in the case of real emotions may be complex in its own ways. Why doesn’t the emotion model in the fake case count as a real emotion, given that it influences the inner processing in much the same way as the real version influences inner processing?
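The two schematic chains above can be caricatured in code. This is only a toy sketch of the schema, not a cognitive model, and every name in it is a hypothetical illustration; notably, the two paths produce identical behavior, which is exactly the difficulty the schema is meant to expose.

```python
# Toy contrast between the two decision chains in the schema above.
# All names and rules are hypothetical illustrations.

def fake_emotional_response(situation, emotion_model):
    """Fake: inner processing -> emotion model -> inner processing -> behavior."""
    # Inner processing decides which emotion to appear to have...
    desired_appearance = "grief"
    # ...consults a mental model of how that emotion behaves...
    candidate_behavior = emotion_model[desired_appearance](situation)
    # ...and inner processing then decides whether acting that way is convenient.
    if len(candidate_behavior) <= 10:  # stand-in for "not too inconvenient"
        return candidate_behavior
    return "abandon pretense"

def real_emotional_response(situation, mood):
    """Real: inner processing -> behavior, the emotion altering the processing itself."""
    # No model is consulted; the emotion changes how the decision gets made.
    if mood == "grief":
        return "withdraw"
    return "engage"

emotion_model = {"grief": lambda situation: "withdraw"}
print(fake_emotional_response("funeral", emotion_model))  # withdraw
print(real_emotional_response("funeral", mood="grief"))   # withdraw
```

From the outside the two calls are indistinguishable; the difference lies entirely in whether the chain runs through a consulted model under the deciding process's control.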

Such thoughts may even lead us to consider whether we really know that our own emotions are genuine. And such worries cannot be dispelled by playing linguistic games and saying that what we call an emotion just is whatever we actually have, because when we wonder whether we possess genuine emotions we are wondering whether we possess essentially the same kinds of emotions as everyone else, and not just an emotion model that we are unaware of. Isn’t it possible that most people have real emotions while a few only have a model of what emotions are like, and don’t know the difference because they have never experienced the real thing? Hopefully we all acknowledge that the idea is absurd. There is something wrong with conceiving of someone as an emotional faker who doesn’t know that they are faking, because in a way the awareness that the emotion is fake seems like a key component of making it fake; while the real emotion is spontaneous the fake one is controlled.

Indeed I think that we can appeal to the idea of control to distinguish real from fake emotions. In the case of fake emotions whether to act emotionally is completely under the control of conscious decision-making. Consciously the person faking an emotion consults their model of what that emotion is like, and consciously they decide whether to act in accordance with that emotion. In contrast real emotions exert control over conscious decision-making. The fact that someone is feeling an emotion causes them to make decisions differently, without a previous decision of theirs to be affected in that way. Of course this is not to deny that we can exert some control over our real emotions, but that control only occurs as a result of resisting them, which means that even in that case our emotions are controlling us, though we have created opposing forces that keep their influence in check. (Which could give rise to a person who is really feeling a particular emotion, but has it under complete control, also faking that emotion in order to display it, despite the fact they don’t have to.) And that is why it is absurd to imagine someone who isn’t aware of the fact that they are faking emotion, because faking emotion requires conscious control over the effects of emotion, and you can hardly have that control without being aware of it.

Now we can answer the complicated question that motivated all this: can computers experience real emotions? If the account I have given here is correct the answer is yes, they could, if properly constructed. Obviously for the possibility of having emotions to exist the computer must be executing some process that counts as consciousness. And let us also suppose that the computer acts in ways that we are tempted to describe as emotional. To decide whether these emotions are genuine all we have to do is consult how the computer comes to the decision to act in that way. Does the process that is the computer’s consciousness consider what an emotional response would be like and then decide to act on it? If that is the case then the computer is just faking emotion. However, if the computer acts in that way because of changes in the process that is its consciousness, changes which are outside of its direct control, then it has real emotions. At least they would be as real as ours are. Since a computer might be programmed in that way I conclude that it is possible for computers to have emotions, although no existing computer manifests them.

December 9, 2007


Filed under: Mind — Peter @ 12:00 am

Although there are plenty of agents in the world we recognize that most of the world is best described as inanimate (meaning without agency, not motionless). While things such as rivers, storms, and fires may at times seem to act as if alive, we know better; we know that they aren’t thinking, that they aren’t agents. But, once upon a time, that was not a typical way of looking at the world. Once upon a time everything was an agent, and all activity could be attributed either to agency, and thus desire, or to things being acted upon. Some have even suggested that it is more natural to view the world in that way, that we are hard-wired to project agency into the world (because over-projection of agency was better than under-projection for purposes of survival), and that it is only by fighting that tendency that we arrive at the view of the world as mostly inanimate that we do. I would argue that projecting agency into the world is also natural in another way. Obviously whenever we answer a “why?” question further such questions will be raised, and people tend not to be completely satisfied with the answer to one until they are all answered. Attributing agency to the world gives an intuitive way to end this chain of questions, because once we attribute desires to explain why things happen we are naturally inclined to stop asking “why?”, because we don’t ask that question about our own desires. We are accustomed to the fact that our desires are simply bare facts, and thus are less tempted to ask why agents have the desires that they do, and more willing to just accept it.

Given that, it is perhaps reasonable to ask why we stopped projecting agency into the world. Perhaps the reasons were purely pragmatic, because while cutting off the possibility of further questions is intellectually satisfying that is not how progress is made. And it might also be simply because we stumbled upon successful explanations of the world that didn’t involve any kind of agency. But those facts alone don’t necessarily stop us from treating the world as full of agents. We could, for example, treat the fundamental particles as agents, and explain why they follow the laws that they do by claiming that they desire to follow those laws. The fact that certain particles are attracted to or repelled by each other almost calls out for a psychological explanation. Or we could attribute agency to the universe as a whole, explaining why things occur as they do because of the universe’s desire for the fundamental particles to move in the patterns that they do. Obviously this would entail accepting agents with some bizarre desires, but that doesn’t seem completely out of the question, since back when the universe was originally explained in this way it was hypothesized that earth desired to be near other bits of earth, a desire that is fairly close to the kind we would have to attribute to the fundamental particles.

Part of the reason we aren’t inclined to take that route is probably that it would make the desires of the agents essentially unfathomable, which is a problem, as we would thus be making more intellectual work for ourselves by adding agency than we would by leaving it out of the picture. Usually we conceive of agency by uniting the desires of the agent under certain umbrella goals that reflect how things can go well or badly for them. But, while we might assign desires to things such as fundamental particles, there is little hope of devising goals that those desires might serve, because the desires will be a random-looking mess, justified not because they make sense but because they work when it comes to predicting what the particle will do. It is hard to imagine what things going well or poorly for a particle could even mean. Certainly they aren’t striving for existence, because some particles possess “desires” that lead to their own destruction. And it is equally hard to attribute goals to the universe as a whole. Not necessarily because it is unimaginable to do so; people often credit the universe with caring about their existence, with wanting humans and life in general to thrive. When it comes to the universe the problem is simply that none of these overall goals is compatible with the way the universe actually “acts”. If the universe is an agent it is one that is indifferent to the plight of the beings within it, because it makes the particles obey the same laws in every circumstance, never deviating from the laws, no matter what is going on. Thus if the universe as a whole were to have goals they would have to be completely alien, such as the goal of constructing a “beautiful” (by some standard) pattern of particle interactions.

That is one problem with projecting agency into the world in modern times. But I suspect the most significant reason we are no longer tempted to project that agency is that we have developed more refined ideas about the nature of agency. As we constructed more complicated machines it became obvious that there was a difference between behavior and agency: lots of things can display complicated behavior, but agency requires a mind, which we know our machines lack because we didn’t put one in while constructing them. And minds are complicated things; they aren’t just going to be found everywhere. Moreover, by being complicated, minds are the sorts of things we now demand explanations of, and so if we were going to start attributing minds to particles or the universe we might legitimately wonder where their minds were and how they worked.

But what exactly is this new conception of agency, and why are we willing to attribute it to cats and dogs but not tables and chairs? Obviously there are many differences between the things we treat as agents and those we treat as simply complicated machines, but I think the key difference is that agents display a certain flexibility that things lacking agency do not. Just because an agent responded a certain way in one situation doesn’t mean that they will respond in that exact same way in the future; the agent learns and their behavior changes. If acting in a certain way frustrates their desires they will cease acting that way in the future, and, conversely, if something is particularly successful they will tend to try it again. In fact it might be argued that it is from this flexibility that we deduce what amounts to things going well or poorly for an agent: things they adapt to avoid count as things going poorly for them, while things they adapt to seek are things going well. This also entails that there is a continuum between things that lack agency and those that possess it, because things can be adaptive to a number of different degrees, but in normal situations the things we encounter lie clearly on one side of the divide or the other, and so we rarely find ourselves in a puzzling situation where we can’t determine whether something is an agent.

This division is more than an arbitrary one: what counts as a productive strategy is substantially different when it comes to dealing with agents instead of non-agents. And that is a pragmatic reason not to project agency onto the world, because if we did we would tend to approach the world with unsuccessful strategies. Obviously things that lack agency are completely predictable; if they behaved a certain way in one situation they will behave in exactly the same way if a similar situation arises in the future. But, on the downside, this also means that there is no way we can change their behavior except by directly taking measures that affect those things (such as damaging them or changing them), or by affecting the rest of the world so as to control what situations arise. Agents, on the other hand, are not as predictable. Admittedly, on some level agents may be composed of mechanical components, and thus be predictable in some principled way. However, for all practical purposes they are unpredictable, because if you take your eyes off them for even a second there is always the possibility that they will have experiences that affect how they behave, invalidating your predictions. On the other hand, because of their mutability, the behavior of agents can be controlled more subtly. A sufficiently intelligent agent can be reasoned with, bribed, or threatened, and less intelligent agents can be trained. And that is why it is an extremely bad idea to treat the universe, or inanimate things in general, as agents. No matter how much you threaten or bribe them they will act the same way they always have.

November 25, 2007

Tool Use

Filed under: Mind — Peter @ 12:00 am

Two obvious abilities that separate people from the rest of the animals are language and tool use. Of course even very primitive animals use objects in their environments; birds, for example, use nests. But humans display the ability to learn to use tools, while the simpler animals can only use those objects that they have evolved to use. The bird’s genes allow it to build a nest, but it is our minds that let us use a hammer. Both abilities are a bit of a mystery; we haven’t successfully developed software that can use language or that can learn to use tools. Since language usually gets all of the attention, today I am going to focus primarily on tool use, and investigate what makes us able to use tools when other animals can’t.

Perhaps saying that those two abilities distinguish humans from the rest of the animals is a bit misleading though. Obviously no other animals can use tools or language in the same way we can. However, certain other primates seem to display primitive forms of language, and can even be taught simple languages understandable by us, which they can go on to teach each other. And these same primates seem to be able to master some simple tools as well. This raises the interesting possibility that somehow language and tool use are connected, that their appearance in the same species is more than a coincidence. Perhaps being able to use language is a prerequisite for being able to use tools, or vice versa, or possibly there is some common cognitive capability that is responsible for both of them. But, then again, given that we have so few samples to draw on, it could very well be a coincidence that the species we know of display both language and tool use.

It’s hard to believe that language is somehow responsible for our ability to use tools, simply because tool use doesn’t seem much like a linguistic act; using a tool doesn’t serve to express a thought. Of course language could be construed as a kind of tool for manipulating people, in a very broad understanding of what a tool is, and from that angle it might seem plausible to claim that the ability to use tools underlies language. But I doubt this is a reasonable connection to make, simply because the way in which language is a tool would make any ability a tool. The ability to use tools seems to necessarily involve a capacity to pick up, and possibly shape, objects, and then use them purposefully to accomplish something that couldn’t be done without them. And, although we can do things with language that would be impossible without it, language is certainly not a case where we must use objects to do what we wish. Language thus seems as much an example of tool use as running is, given that we can accomplish things by running that we couldn’t do by walking. Thus if there is a connection between the two, what we must be looking for is some common ability that underlies both of them, one that, once it exists, can manifest itself both as the ability to use tools and as the ability to use language.

Towards that end we might look for the capabilities that allow us to use language and those that allow us to use tools, and see whether there is some overlap. Language is the simpler case, so we can take a brief look at it to begin. The primary cognitive capability behind language is the ability to allow one thing to stand for another, specifically the ability to take certain symbols as standing for other things or ideas. And of course the capability to learn makes language practically possible, because it allows us to develop such representative connections without being born knowing them. But when it comes to what allows us to use tools the answer isn’t so obvious, probably because we have spent much less time thinking about the matter. Making machines that can learn and communicate has always seemed the higher priority; tool use is often thought to be a problem that will be simple to solve once we have the first two mastered. And maybe it will be, but those priorities haven’t left us with an easy way to think about what goes into it.

We might suppose that tool use is simply the product of being able to learn from experience. We could even tell a story about how tools begin to be used invoking only that capability in an explanatory role. Suppose, we say, that an individual is simply toying with some object in their environment, an accidental behavior that serves no purpose. And sometimes they won’t put these objects down when they go about some important task. This opens up the possibility that the object already in hand will simply happen to be useful for that task. Success with that object reinforces the behavior, and thus tool use is born, as the individual will develop a tendency to pick up the same object again when they go about that task in the future. But many animals have the capability to learn from experience, some better than others, and yet only a few species display tool-using behavior. Again, this may be an accident of nature; it may be that the primates are simply the only species with the right biological equipment (thumbs) to use tools. But, on the other hand, if tool use really were that easy then we would expect many animals to be able to use very simple tools with just their mouths. And since tool use is advantageous, evolution would select for those individuals better able to use tools, which would eventually result in animals with the right equipment. So I suspect that something is missing from this simple story, something that prevents most animals from using tools in the way described.
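The reinforcement story above is simple enough to sketch as a toy simulation. This is purely an illustration of the learning-from-experience mechanism, not a cognitive model; the objects, their usefulness values, and the update rule are all hypothetical choices of mine.

```python
import random

# Toy sketch of the reinforcement story: an object that happens to help
# with a task gets picked up more often in the future. The objects and
# numbers below are illustrative assumptions, not claims about animals.

# How useful each object actually is for the task (unknown to the agent).
USEFULNESS = {"stick": 0.8, "leaf": 0.1, "stone": 0.5}

def run_trials(n_trials=1000, seed=0):
    rng = random.Random(seed)
    # The agent starts with an equal tendency to toy with each object.
    tendency = {obj: 1.0 for obj in USEFULNESS}
    for _ in range(n_trials):
        # Pick an object with probability proportional to its tendency.
        objs = list(tendency)
        weights = [tendency[o] for o in objs]
        obj = rng.choices(objs, weights=weights)[0]
        # The task succeeds more often with more useful objects.
        if rng.random() < USEFULNESS[obj]:
            tendency[obj] += 1.0  # success reinforces picking it up again
    return tendency

final = run_trials()
# Over many trials the more useful objects tend to dominate the habits.
print(final)
```

The point of the sketch is how little machinery the story needs: there is no representation of the task or the object, only a frequency that drifts upward with success, which is exactly why the story fails to explain the gap between animals that learn and animals that use tools.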

It is possible that what the rest of the animals are missing is some form of curiosity or willingness to experiment that drove our ancestors to fiddle with sticks and try poking things with them later. While that does explain why we are different from the rest of the animals, it leaves a number of questions unanswered. First of all, curiosity doesn’t seem like something that would be that hard to evolve, and so if curiosity were really the key to tool use then we would expect more species to exhibit it. Secondly, it doesn’t explain why the primates that are able to use simple tools are unable to develop more complex ones. If the barrier is really just the unwillingness of most animals to try new things, then the primates, who do try new things, should display essentially the same capacity for tool use as we do. So if curiosity isn’t the culprit, we might instead finger our more developed ability to model the world in our minds and to plan out complex actions. Instead of supposing that tool use is developed by random experimentation, we might attribute it to moments of discovery, when an individual realizes that an object used in a certain way might work better than their natural equipment. And such moments are only possible if we can envision the outcome of events at a sufficient level of detail, both to predict outcomes where nothing similar has happened before and to build up those predictions from general rules rather than from a collection of observations about specific objects and their interactions (because the latter would prevent us from mentally adding a new object into the mix). This explanation is better than the previous one, because it explains how tool use can come in degrees, based on how well the tool users are able to mentally model the world. However, now our explanation seems to be working against us; the ability to model the world seems more complicated than the ability to use tools, and thus in need of an explanation itself. To satisfy our curiosity about the connection between tool use and language we need either something simpler or to break that capacity down into its own components.

Rather than following this road any farther, let’s start over and this time attack the problem from the opposite direction: instead of trying to find the explanation for why we can use tools by itself and then seeing whether it overlaps with our ability to use language, let us proceed on the assumption that it does, and see if we can explain our ability to use tools on the basis of the capabilities behind our ability to use language. If they are to have something in common then it must be the ability to allow one thing to stand for another, since the capacity to learn was rejected earlier as a sole explanation. This suggests that we might attribute our ability to use tools to our ability to associate certain actions or goals with the objects that are the tools themselves. Conceptually this would be the transition from seeing a stick as a stick to seeing a stick as both a stick and a pointy thing to get food with. Without this capacity we might suppose that systematic tool use would be impossible. At best the animal could learn behaviors, and maybe, if the tools were always in a fixed spot, a very complicated behavior could be developed involving fetching a tool and then doing something with it. But this doesn’t allow the kinds of developments that real tool use comes with, for example the ability to substitute similar objects for each other; although a hammer is a hammer, anything heavy and hard can be a hammer too in a pinch. Obviously this is, as described here, at best an incomplete proposal. How exactly do associations work? What gets associated with what? And how exactly do they lead to complicated tool use? I’ll set aside such questions for the moment and simply say that it has the ring of truth about it, and explains nicely why language and tool use seem to come hand in hand, without appeal to some ill-conceived idea of a linear range of intelligence where they both fall near the top.
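The substitution point, that anything heavy and hard can be a hammer in a pinch, can be made concrete with a small sketch. Here a goal is associated not with one particular object but with the properties an object must have, which is what lets similar objects stand in for each other. The objects, properties, and goals are all hypothetical examples of my own, not anything argued for above.

```python
# Toy sketch of the association proposal: goals are tied to properties,
# not to particular objects, so any object with the right properties can
# serve as the tool. All names below are illustrative assumptions.

# Objects described by their salient properties.
OBJECTS = {
    "hammer": {"heavy", "hard", "graspable"},
    "rock":   {"heavy", "hard", "graspable"},
    "stick":  {"long", "pointy", "graspable"},
    "leaf":   {"light", "flat"},
}

# Goals associated with the properties a suitable tool must have,
# i.e. "a pointy thing to get food with" rather than "that stick".
GOALS = {
    "drive_nail": {"heavy", "hard", "graspable"},
    "get_food":   {"long", "pointy", "graspable"},
}

def suitable_tools(goal):
    """Return every object whose properties satisfy the goal's needs."""
    needed = GOALS[goal]
    return sorted(obj for obj, props in OBJECTS.items() if needed <= props)

# A hammer is a hammer, but the rock qualifies too, in a pinch.
print(suitable_tools("drive_nail"))  # prints ['hammer', 'rock']
print(suitable_tools("get_food"))    # prints ['stick']
```

What the sketch captures is the "standing for" relation doing the work: the goal stands for a class of objects rather than a single learned behavior with a single object, which is the difference between real tool use and the fixed fetch-and-use routine described above.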
