On Philosophy

May 18, 2007

Functional Facts Are Fundamental

Filed under: Metaphysics,Mind — Peter @ 12:00 am

Imagine that one day the basic components of the world were instantaneously replaced by other things, in a one-to-one correspondence, which had the same patterns of interactions (the same functional properties) as the things they replaced. Would we notice the change? I don’t think we could. Since all our instruments, both scientific devices and our natural ability to perceive the world, would be working the same way as they did before, they would report the same information to us. This is of course an extension of my previous claim about properties, namely that the basic properties are best identified with a specific kind of causal disposition, which is another way of saying that the basic properties are functional properties.

Now if we accept that the world would remain exactly the way it currently is, at least apparently, as long as all the functional properties are preserved, then we must accept that every property ultimately reduces to functional ones. I think that most would agree to this in principle for many cases. But there are those who may insist that certain properties depend on the nature of the substance. For example, some may argue that the property of being metal depends on the object really being composed of certain kinds of atoms, and not just having certain properties. But what is it to be a certain kind of atom? Well since everything is effectively unchanged under replacement, given that functional properties are held constant, we can conclude that being a certain kind of atom is effectively identical to having a certain complex causal disposition. Thus to identify being a metal with being made up of certain kinds of atoms is really to identify being a metal with being made up of components with certain kinds of causal dispositions. But that is a functional property (it depends on the functional relations between parts of the object and the functional relations of those parts to the external world). And hence being metal is really a functional property, even though it doesn’t seem like one.

Since everything reduces to functional properties we can conclude that consciousness too reduces to functional properties. Obviously this contradicts, to some extent, the claims of those who say that consciousness is a uniquely biological property. But they might claim that the functional properties found in biological systems are unlike those found in systems like computers. Biology, they might point out, works in parallel, while computers are serial. To determine if this, or a response like it, can succeed in defending the position that consciousness is a purely biological affair we need to understand how a system is determined to have various functional properties.

Specifically I would like to point out that a single system can be validly described as having numerous different sets of functional properties. Consider a system with three basic parts, A, B, and C. And for simplicity let us say that A only acts upon B, B upon C, and C upon A. Obviously one way to describe the functional properties of this system is in terms of A, B, and C. Let us call this the [A-B-C] description of the system. But we could equally validly describe the functional properties of the system in terms of the interaction between A and the combination of B and C. We can call this the [A-BC] description. Obviously the [A-BC] description in some sense leaves out some of the information that the [A-B-C] description includes, but of course this doesn’t make it an invalid description of the system. Now we can consider a system with two parts Y and Z. And this system might have a [Y-Z] description. Now it is perfectly possible for [A-BC] and [Y-Z] to be the same description; both descriptions include two parts, and there is nothing preventing those two parts from having the same interactions. If this is indeed the case then the A-B-C system and the Y-Z system would have some of the same functional properties, and if the Y-Z system has some property by virtue of its functional properties then the A-B-C system has it too, since there is a valid description of that system in which it has all the functional properties of the Y-Z system.

Obviously there are a vast number of functional descriptions of the human brain. Of the almost uncountably many functional descriptions that fit the brain there are four that we tend to talk about most often. One is the description that captures every interaction of every atom, one that captures the interactions of the neurons, one that captures the interactions of the neuron groups, and one that captures only the interactions of the brain with the external world. We can call these descriptions [atomic], [neuronal], [cluster], and [whole] respectively. Now if the biological brain has the property of being conscious it must be because some particular functional description can be said to apply to it. Remember, we have already established that all properties must be identical with some kind of functional properties. Let’s suppose, arbitrarily, that the important functional properties are the ones captured by the [neuronal] description. (Note: identifying consciousness with the properties of [whole] is behaviorism.)

Obviously the functional description of a computing machine that captures every fact about it will not be identical with any functional description of a biological brain. But, like the brain, we can describe the functional properties of the computer at higher and higher levels of abstraction. At some level the relevant parts captured by the functional description of our computing machine will correspond to the interactions between various software constructs, and not directly to any of the features of the physical implementation of the machine. But the functional interactions of software constructs are completely arbitrary: we can program the machine so that the software constructs have any sort of functional properties we wish, given that we have some finite description of those properties in mind that we can program them to conform to (Church’s thesis*). Certainly we can program them to display the same properties as [neuronal], since that description is finite and well defined, the brain being of finite size and governed by natural law. Thus we can program our computer to have the functional properties that we had previously said consciousness depended on. And this is true no matter which set of the functional properties of the brain we think consciousness depends on.
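As a sketch of the claim that software constructs can be given any functional properties we can finitely describe, here is a minimal Python illustration of my own, with a made-up transition table: the table is the finite functional description, and a generic interpreter makes the machine conform to it exactly.

```python
# A finite functional description: for each (state, input) pair, the
# resulting state and output.  The entries here are arbitrary examples.
description = {
    ("idle", "ping"): ("active", "pong"),
    ("active", "ping"): ("active", "pong"),
    ("active", "stop"): ("idle", "ack"),
}

def realize(description, state, inputs):
    """Run a system whose functional properties are exactly those the
    table specifies: same transitions, same outputs, nothing more."""
    outputs = []
    for symbol in inputs:
        state, out = description[(state, symbol)]
        outputs.append(out)
    return state, outputs
```

Any finite, well-defined description, including in principle one at the level of [neuronal], can be realized this way; the only constraint is that the description be something we can actually write down.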

Thus, if we accept that all properties are ultimately functional properties, then consciousness, being some functional property of our biological brains, must ultimately be realizable in non-biological systems of sufficient complexity as well.

* The reason (or one way of looking at the reason) we can’t program computers to do things like solve the halting problem is that when you begin to specify in detail how the machine should function in order to solve it, you realize that you can’t give a finite recipe for determining whether an arbitrary program will halt without running it and seeing whether it does. Of course humans can’t solve the halting problem either, so this is no more a serious limitation on the power of computers than it is on our thinking power. And yes, the functional properties of a process that acts in parallel can be fully captured by a computing machine that works serially.
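The last sentence of the footnote can be illustrated directly. In this Python sketch (my example; the update rule is an arbitrary toy in which each cell sums its neighbours), a serial loop reproduces a simultaneous "parallel" update exactly by reading from a snapshot of the old state, while a naive in-place loop does not.

```python
# A serial machine capturing a parallel process: compute each tick from
# a snapshot of the previous state, so no unit "sees" a mid-update value.

def parallel_step(cells):
    """Semantically simultaneous update: every new value is computed
    from the old state only, even though the loop itself runs serially."""
    n = len(cells)
    snapshot = list(cells)  # freeze the old state
    return [(snapshot[(i - 1) % n] + snapshot[(i + 1) % n]) % 10
            for i in range(n)]

def naive_serial_step(cells):
    """In-place update: later cells see already-updated neighbours,
    which changes the dynamics -- this is the pitfall, not the fix."""
    cells = list(cells)
    n = len(cells)
    for i in range(n):
        cells[i] = (cells[(i - 1) % n] + cells[(i + 1) % n]) % 10
    return cells
```

The snapshot trick generalizes: any parallel process with a finite description can be stepped serially without changing its functional properties, only (at worst) how long each tick takes to compute.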



  1. This reminds me of some discussions I had a while back that prompted this post:


    Since everything reduces to functional properties we can conclude that consciousness too reduces to functional properties

    Why exactly is that? David Chalmers offers virtually the same thought experiment you do, except that he’s just replacing neurons with silicon, his point being that consciousness is preserved. Yet he concludes the exact opposite of your view: for him this hardly means consciousness reduces to functional properties, because he can imagine either scenario (with neurons or without neurons) in a logically possible world identical to ours where there is no conscious experience. I’m not going to say I necessarily agree with him, but I do believe he is right that constraining consciousness nomically by functional states doesn’t guarantee that consciousness logically supervenes on those functional states.

    Moving on to those who insist on biological properties making a difference, while I ultimately disagree, if your charge is calling out say, John Searle, then I think we have to be careful and at least note that Searle’s definition of functionalism is somewhat unique and it would be easy to beg the question against him.

    This gets tricky…I’m thinking I have a post on it somewhere.. but Searle, for instance, doesn’t just say that a machine doesn’t possess the right kind of “stuff” to be conscious. He thinks a machine *could* be conscious. But AI will never get there. If you go through my link, I think I said something about Searle’s “causal capacity” being fairly close to how David Chalmers defines functionalism, and his view of functionalism is very different. For Searle, causal capacity is an objective feature of the world but functional capacity is purely ascribed by our minds. Which is why he argues things like the paint on his wall can be conscious in a reductio, since we can interpret it just as we can interpret a brain.

    Well, there’s more that could be said but I’ll leave it at that for now.

    I do agree with you that an algorithmic brain wouldn’t be limited by godel or inferior to parallel architectures at the end of the day.

    Comment by AG — June 6, 2007 @ 12:23 am

  2. imaginable != possible; to say that consciousness is determined by something other than functional properties is to say that it is underdetermined, since there is nothing else besides functional properties at some level to determine it. There is a matter of fact whether a system is conscious or not. ∴ consciousness reduces to (is completely determined by) functional properties.

    Comment by Peter — June 6, 2007 @ 12:40 am

  3. I think you’ll get some disagreement on that. Again, Chalmers is a key example since you’re both doing the same thought experiment. See his principle of organizational invariance:

    2. The principle of organizational invariance. This principle states that any two systems with the same fine-grained functional organization will have qualitatively identical experiences. If the causal patterns of neural organization were duplicated in silicon, for example, with a silicon chip for every neuron and the same patterns of interaction, then the same experiences would arise. According to this principle, what matters for the emergence of experience is not the specific physical makeup of a system, but the abstract pattern of causal interaction between its components.

    Functional properties determine and constrain consciousness, but they don’t guarantee a reductive explanation. More importantly, if organizational invariance is true, it doesn’t follow that consciousness reduces to functional properties.

    To see further the motivation for what I just said, to see more clearly the difference between reductionism and determinism and how determinism doesn’t imply reductionism, we can move beyond Chalmers to functionalism in general. Why did any naturalist need to become a functionalist in the first place, since it’s trivial to just believe the universe is causally closed and that all mental life is determined by the brain? Did Hilary Putnam think something other than the determined physics of the brain caused consciousness?

    If consciousness reduces to determined physical laws, why not just be an identity theorist? Why be a functionalist?

    Comment by A.G. — June 6, 2007 @ 9:44 am

  4. If the presence of X determines the presence of Y then in that situation you can give a reductive explanation of Y in terms of X. That is all that reduction requires. Chalmers is just working with a bad method and a bad understanding of reduction. We moved to functionalism because the identity theory is demonstrably false when considering the effect of brain damage on consciousness, and because of paradoxes you can construct with the identity theory. And more importantly you can give a definition in functional terms as to why a system is conscious that explains why it is conscious, which you cannot do with just the identity theory.

    Comment by Peter — June 6, 2007 @ 10:08 am
