Functionalism can be defined as the theory that consciousness depends on, or is identical with, the functional properties of the brain. But when you examine this claim to determine what it really says, it turns out to be so general that it is basically materialism with the possibility of the identity theory ruled out. Of course both functionalism and the identity theory are materialist theories, and so both can only explain the mind by appealing to physical facts. The difference between the two is really that the identity theory holds that all the physical facts are relevant, while functionalism holds that only a subset of them, the functional ones, are. The names are a bit misleading, then, since even under functionalism a particular instance of a mind (which could, in theory, be realized in a number of different ways) is identical with the physical facts that are realizing it.
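To make the multiple-realizability idea concrete, here is a minimal Python sketch (the classes and the toy "capacity" are hypothetical illustrations, not anyone's actual proposal): two systems built from entirely different internal machinery that nonetheless realize the same function, which is all a functionalist requires for them to count as instances of the same type.

```python
# A minimal illustration of multiple realizability: two systems with
# completely different internals that realize the same function
# (addition over small numbers stands in for a mental capacity).

class LookupAdder:
    """Realizes addition with a big lookup table -- one 'substrate'."""
    def __init__(self):
        self.table = {(a, b): a + b for a in range(10) for b in range(10)}

    def respond(self, a, b):
        return self.table[(a, b)]

class BitwiseAdder:
    """Realizes addition with bitwise logic -- a different 'substrate'."""
    def respond(self, a, b):
        while b:                       # ripple-carry addition
            carry = (a & b) << 1
            a, b = a ^ b, carry
        return a

# Structurally very different, functionally identical -- which, for a
# functionalist, is all that matters.
for a, b in [(2, 3), (7, 7), (9, 0)]:
    assert LookupAdder().respond(a, b) == BitwiseAdder().respond(a, b)
```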
But because functionalism is so broad, it tells us little about the mind on its own. As a result, materialists interpret the central hypothesis of functionalism, that mental properties depend on the functional properties of the system, in a number of different ways. Here I will consider three broad categories of interpretation.
The first I refer to as teleological functionalism. Teleological functionalism interprets the idea that the mind is the function of the brain as meaning that the mind is the goal, or purpose, of the brain, much like the function of a can-opener is to open cans. Although it may seem unlikely, it certainly isn't impossible. If consciousness had some survival value for the conscious being, evolution would have selected for it, resulting in an organism that devoted some of its biological resources to the function of being conscious, just as gills evolved to have the function of extracting oxygen from water, and the beaks of hummingbirds to have the function of extracting nectar from flowers. The problem with this interpretation is not that it is implausible; in fact I think it likely that consciousness has some survival value. It's just that interpreting functionalism in this way leaves us in the dark as to how the brain accomplishes this function, what consciousness itself is, etc.; it leaves unanswered the various questions that functionalism was supposed to address, meaning that in addition to this interpretation of functionalism we would need to endorse some further interpretation of functionalism, or a version of the identity theory.
A second interpretation of functionalism, advocated by Armstrong and Lewis, is called the causal theory, or causal functionalism. This shouldn't be confused with the causal theory of reference, although the two are related; generally, if you accept one of the causal theories it seems natural to accept the other. In any case, the causal theory holds that mental properties, and consciousness, depend on causal relations. And certainly this seems like a plausible interpretation of functionalism, since function and causation are closely connected. In essence, causal functionalism holds that a specific mental property or state is simply something that causes a certain range of behaviors and mental states, and is caused by a certain range of perceptions and mental states. But there is an obvious problem with this version of functionalism, which is that it tends to collapse into behaviorism. Consciousness is one of the things that we want our theory of the mind to explain, and how can causal functionalism explain it? Causal functionalism makes no claims about the nature of the states that cause behavior and are caused by perception, and so when we attempt to explain consciousness we must either admit that it is unexplainable by this theory (i.e. the theory can explain why you claim to be conscious, but it cannot explain why you experience being conscious) or claim that it is only some kind of behavioral illusion. So a causal functionalist theory might not be wrong, in the sense that it might correctly predict behavior, but it can't answer the questions about the mind that we developed the theory to investigate in the first place. It can only explain our behavior, and behavior is not the deep mystery.
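To make the causal-role idea concrete, here is a minimal Python sketch (the robots and the "pain" role are hypothetical illustrations): a state counts as pain not because of what it is made of, but because it is caused by damage and in turn causes withdrawal.

```python
# A toy version of a causal-role definition: a state counts as "pain"
# if it is caused by damage and in turn causes withdrawal, whatever
# the state itself happens to be made of.

def plays_pain_role(system):
    state = system.perceive("damage")          # what does damage cause?
    return system.behave(state) == "withdraw"  # does that state cause withdrawal?

class RobotA:
    def perceive(self, stimulus):
        return 0xFF if stimulus == "damage" else 0x00  # state is a number

    def behave(self, state):
        return "withdraw" if state == 0xFF else "rest"

class RobotB:
    def perceive(self, stimulus):
        return "ALARM" if stimulus == "damage" else "ok"  # state is a string

    def behave(self, state):
        return "withdraw" if state == "ALARM" else "rest"

# Different internal states, same causal role: on the causal-functionalist
# reading, both robots are "in pain" when damaged.
assert plays_pain_role(RobotA()) and plays_pain_role(RobotB())
```

Notice that the sketch also displays the problem raised above: nothing in the causal-role test says anything about what the pain state is like from the inside, only about what it causes and is caused by.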
A third interpretation of functionalism is the informational interpretation. This version of functionalism holds that mental states depend on how the system processes information, and on what information it is currently processing (note: the way in which the system processes information, not just what information comes in and out, see here). In the brain this information is encoded in the form of neuronal states and electrical signals, but in general it could be anything. Of course this raises additional problems, such as determining what counts as information and what the content of that information is. But it certainly seems like it might have the potential to answer the questions we want it to. If we can show that certain states in these systems can be said to correspond to “thoughts”, and show how these thoughts must be conscious based on how they are incorporated into the operation of the system, then we might very well have an answer: a theory of consciousness framed in objective terms. Certainly there is no guarantee that this is the right framework for our theory, and there might be objections that rule it out too, but of the three it seems the most promising.
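The parenthetical distinction, how the information is processed versus merely what goes in and comes out, can be illustrated with a small Python sketch (a hypothetical toy example): two routines that agree on every input yet process the information in entirely different ways. A purely input-output description cannot tell them apart; a description of the processing can.

```python
# Two sorting routines with identical input-output behavior but very
# different internal information processing. A purely input-output
# description cannot distinguish them; the informational interpretation,
# which cares about *how* the information is processed, can.

def insertion_sort(xs):
    out = []
    for x in xs:                              # processes by comparing each
        i = 0                                 # item against a growing
        while i < len(out) and out[i] < x:    # sorted prefix
            i += 1
        out.insert(i, x)
    return out

def counting_sort(xs, max_val=10):
    counts = [0] * (max_val + 1)              # processes by tallying
    for x in xs:                              # occurrences -- it never
        counts[x] += 1                        # compares two items at all
    return [v for v in range(max_val + 1) for _ in range(counts[v])]

data = [3, 1, 2, 3, 1]
assert insertion_sort(data) == counting_sort(data) == [1, 1, 2, 3, 3]
```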
Now, as a final note, I would like to mention “strong-AI” functionalism. Strong-AI functionalism isn’t really a separate interpretation of functionalism, but rather two additional positions that might be taken with regard to the possibility of conscious computers. One version of the strong-AI thesis is that the mind is simply a kind of computer. Now, obviously the mind is not a von Neumann architecture computer (the kind you and I are using); unlike our computers, it does not have a sequence of instructions it is following to perform operations on some data set. However, when the claim is made that the mind is a computer, we should probably assume that the claim being made is really that the mind is a kind of universal Turing machine. Whether this is or isn’t the case depends on how exactly the world works, specifically on whether a Turing machine can simulate any natural process. For if a Turing machine can simulate a neuron, then our brain is functionally equivalent to a large number of Turing machines operating in parallel. And any number of Turing machines can be simulated by a single Turing machine (a consequence of Turing’s universality result), so then yes, the mind would be a very unusual kind of Turing machine. The other side of the strong-AI thesis is simply the claim that computers can be conscious. And under the third interpretation of functionalism they certainly can be, since consciousness would only depend on processing information in a certain way, and a universal Turing machine can carry out any computable way of processing information (again by Turing’s universality result).
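The "many machines simulated by one" claim can be illustrated with a short Python sketch (a simplification: simple step-functions stand in for full Turing machines, but the round-robin interleaving trick is the same one a universal machine can use): a single sequential loop reproduces the joint behavior of several independent machines by stepping each one in turn.

```python
# One sequential machine simulating several machines "in parallel" by
# interleaving their steps (simple step-functions stand in for full
# Turing machines; the round-robin trick is the same).

def run_interleaved(machines, max_rounds=1000):
    """Step every non-halted machine once per round, in a single loop."""
    states = [s for s, _ in machines]
    halted = [False] * len(machines)
    for _ in range(max_rounds):
        for i, (_, step) in enumerate(machines):
            if not halted[i]:
                states[i], halted[i] = step(states[i])
        if all(halted):
            break
    return states

# Two toy "machines": one counts up to 5, one doubles until exceeding 100.
count_to_five = (0, lambda s: (s + 1, s + 1 >= 5))
double_up     = (1, lambda s: (s * 2, s * 2 > 100))

print(run_interleaved([count_to_five, double_up]))  # [5, 128]
```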
I used to buy the strong-AI arguments, but lately I’ve been thinking a lot about the implications of the “hailstorm” argument that I mentioned back in the comments to your “Are Rocks Conscious?” piece. (http://www.edge.org/3rd_culture/lanier03/lanier_index.html) My concern is that if our mind is just a form of computing, it’s hard to see how it isn’t just equivalent to the calculations implicitly performed by any sufficiently complex physical system. If you don’t have to prescribe the way of interpreting the computer, anything can be used to calculate anything. The problem is, I don’t see how there can be anything outside of us that prescribes the interpretation of our brain activity.
But then maybe I’m being too homunculus-like in my thinking?
Comment by Carl — January 20, 2007 @ 2:38 am
The idea is to show that only certain interpretations of the data are valid in a given situation, given how that information is linked to behavior/intentionality/representation.
Comment by Peter — January 20, 2007 @ 4:15 am