5: Mental Properties
Just as we describe physical objects with physical properties, we describe the mind with mental properties. Some mental properties describe the character of mental acts, such as their being focused or distracted; some describe the kind of mental act, such as being a thought or an act of will; and some describe the content of the mental act. Describing the nature of mental properties is not strictly necessary to the task of forming a theory about the mind; for example, the proposals about intentionality and qualia outlined previously had no need of them. On the other hand, it would be a poor explanation of the mind that omitted them altogether.
The usual way to fit mental properties into a materialist theory of mind is to claim that they supervene on some of the physical properties, which is a fancy way of saying that the mental properties apply whenever certain collections of physical properties hold for the system. Supervenience is certainly a decent way to start thinking about how mental properties are to be captured by materialism, but it has its flaws. The primary problem with supervenience is that it doesn’t identify the mental properties in any way with the physical properties. This leaves the mental properties as something extra, which is certainly not a materialist theory about the mind. And because these mental properties exist in addition to the physical properties, it is hard to see how they could be the cause of events. After all, the physical world is causally closed, so only physical properties have a causal effect; if the mental properties aren’t in some way physical then they must be epiphenomenal.
We could attempt to overcome the problems with supervenience by re-defining it, but I think this would be unnecessarily confusing. Instead I propose that the theory that the mental properties supervene on the physical be replaced by the theory that the mental properties are dependent on the physical properties. What I mean by dependence is a very limited claim: an instance of a mental property is identical with some collection of physical properties, although instances of the same mental property may not always be identical with exactly the same physical properties. A good example of this kind of dependence, in non-mental terms, is the property of being a colored surface. When we look at the fundamental physical facts of a surface there is no color property; the only properties that exist concern the kinds of molecules that make up the surface and their arrangement. This doesn’t mean that we have to throw out the property of being colored as non-existent; it means only that we have to identify it with some of the physical features of the surface. So the whiteness of a piece of paper would be identified with some collection of physical properties of that surface, and the whiteness of snow would be identified with a different collection of physical properties of its surface.
And if we accept that the mental properties depend on the physical properties as detailed above, then one of the problems of mental causation is resolved. Since the mental properties are no longer essentially separate from the physical ones, they have all the same causal powers as the physical properties that they depend on. To make another analogy, the causal powers of mental properties are like the causal powers of solidity. Solidity is a property that depends on more basic physical properties of the object (specifically, how strongly the molecules in the object are attracted to each other), a dependence of the same sort that I claim exists between the mental properties and the physical ones. And we have no problem dealing with solidity and causation: the solidity of an object just is how strongly its molecules are attracted to each other, and if that attraction prevents the object from passing through another then we can legitimately say that its solidity prevented it from passing through.
But there is also another, less talked about, problem with materialist mental causation, which arises from the fact that several systems could instantiate the same mental properties. (The long story as to how this is possible goes as follows: the mental properties depend on the “functional” properties of the system, and systems with widely different physical makeup can have the same “functional” properties; thus the same mental properties can be had by systems that are physically unlike each other in many ways.) The problem is that if the systems are physically different from each other in some ways, then even though those systems have the same mental properties now, there is no guarantee that they will have the same mental properties in the future. And this is a problem because it implies that there are no mental laws: that from the mental properties of the system at one moment no reliable prediction can be made regarding the mental properties of that system at future moments, which is contrary to our experience. The solution to this problem is to embrace functionalism, or something like it, which says that what is important for determining mental properties is how the system develops over time. Given that restriction, the requirements for a certain mental property to apply might restrict its occurrence to systems that will develop in the right kind of way, and which thus exhibit mental laws. But let me leave the specifics as to what kinds of materialist theories can and cannot be suitable theories about the mind until the next section.
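The claim that physically unlike systems can share the same functional properties has a familiar analogue in software, which may help make it concrete. The following is only an illustrative sketch of my own, with hypothetical names: two stacks whose internal “makeup” is entirely different, yet which are indistinguishable at the functional level.

```python
# Two "physically" different realizations of one functional role.
# Illustrative analogy only: internal program state stands in for
# physical makeup, observable behavior for functional properties.

class ListStack:
    """Realizes the stack role with a contiguous array."""
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()


class LinkedStack:
    """Realizes the same role with linked nodes: a different makeup."""
    def __init__(self):
        self._head = None

    def push(self, x):
        self._head = (x, self._head)

    def pop(self):
        x, self._head = self._head
        return x


def behave(stack):
    """Probe the functional profile: same inputs, observe outputs."""
    for x in (1, 2, 3):
        stack.push(x)
    return [stack.pop(), stack.pop(), stack.pop()]


# Internally unlike, functionally identical:
assert behave(ListStack()) == behave(LinkedStack()) == [3, 2, 1]
```

The analogy also shows where the worry about mental laws comes from: nothing about sharing one functional role today guarantees that two internally different systems will continue to behave alike under every future condition.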
6: Shape of a Materialist Theory
We have already uncovered in our investigations some limitations on the shape a materialist theory can take. We found that it must be internalist, and we found that it must identify mental properties in some way with collections of physical properties (in order to overcome the problems of mental causation). Now I will explore these limitations further, in order to reduce the number of possibilities, and thus make clearer the way towards a positive theory, that is, a theory that says what consciousness is in materialist terms.
6.1: Identity and Materialism
As I established above, we must in some way identify the mental properties with physical properties. But this does not mean we have to embrace the identity theory about consciousness, which identifies a mental state with the totality of physical properties that describe the brain. Such an identity theory is problematic for two reasons. First, it seems to rule out the possibility that systems other than the human brain can be conscious, since it is unlikely that any other creature, whether alien or artificial, will be enough like us to have sufficiently similar physical properties to be covered by the identity theory. And this is a problem with the theory, since it provides us with no good reason to believe that such beings couldn’t be conscious. The second problem with the identity theory is that it doesn’t provide any insight as to why the specific collection of physical properties in our brains is identical with one conscious state instead of another, or why it is conscious at all.
But just because we reject the identity theory doesn’t mean we have to reject any identification of the mental with the physical. What we need to do, rather, is identify the mental properties with only a subset of the physical properties. And the only serious contender for this role seems to be the functional properties (a functional property being identified with a collection of physical properties).
6.2: Argument Against Static Theories
But let us assume for a moment that there were some other, non-functional, theory about consciousness. Such a theory would be a static theory, meaning that whatever the mental properties depend on is a feature of the physical properties that exist at a single moment. Why can’t non-functionalist theories appeal to the properties that exist in future and past moments? Well, surely if a system is conscious over a period of time it is conscious at any given moment within that period. Now at that moment all we have to work with is the properties that currently exist, because future and past properties could change with no effect on the current state of the system (assuming we could somehow disrupt past events without propagating those changes to future moments). Those changes shouldn’t affect whether the system is conscious at that moment, since we haven’t changed any of the physical properties that consciousness at that moment depends on. And this means that we can’t appeal to those future or past properties in our theory about consciousness. How is functionalism different? Well, functional properties have to do with how the system will change over time, and how information is carried by those changes. Of course functionalism too must deal with single moments, but we can deduce functional properties from the physical properties that exist at a given moment, assuming we have an understanding of the laws that govern them. Thus functional properties have built into them the idea of a usual progression of the system. Even if that progression is somehow violated by our hypothetical meddling, we can still appeal to it, because when we talk about the progression what we are really talking about is how the system is likely to change over time, as given by the properties that exist at the given moment.
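The point that a system’s usual progression can be deduced from its momentary state plus the governing laws can be pictured with a toy deterministic system. This is only a sketch of my own, not part of the argument: the transition table plays the role of the laws, and the current state plays the role of the physical properties at a moment.

```python
# A toy deterministic system. The "laws" are a fixed transition
# table; the "physical properties at a moment" are just the state.
# Hypothetical example for illustration only.

RULES = {"A": "B", "B": "C", "C": "A"}  # the fixed dynamics


def projected_trajectory(state, steps):
    """Deduce the usual progression from the present state alone."""
    path = [state]
    for _ in range(steps):
        state = RULES[state]
        path.append(state)
    return path


# The disposition to unfold A -> B -> C -> A is fixed by the current
# state together with the laws, whether or not the system is running:
assert projected_trajectory("A", 3) == ["A", "B", "C", "A"]
```

In these terms, a functional property is read off the state plus the rules, so it can be attributed at a single moment while still being a claim about how the system would develop.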
Obviously the identity theory is one example of a static theory. The theory that consciousness is based somehow on representation or self-representation can also be a static theory, if representation itself is not defined in functional terms (which it very well may be). So what is wrong with static theories? Well, simply consider a system that we consider conscious, and consider what would happen if that system were somehow stopped, frozen with all the relevant physical properties preserved. And then assume that sometime later we restarted the system. According to a static theory, during the time in which the system was frozen it should still have been conscious, since all the relevant physical properties, on which consciousness supposedly depended, were present. But if we ask a person who had been frozen like this about their experience, they will report that they suddenly “jumped” from the moment they were frozen to the moment they were unfrozen. And thus they will report that they were not conscious during this period. This is not a problem with memory: it is not the case that they have forgotten being conscious, nor that they were unable to form memories; it is just that the system wasn’t in action, so no experiences could have been had. And this is a problem with static theories: they falsely claim that the temporally frozen system was conscious, when really all it had was the potential to be conscious if it were un-frozen.
6.3: Argument Against Teleological Functionalism
But being functionalist in some way is not a guarantee that a materialist theory will be successful. Functionalism covers a wide range of possibilities, and they can’t all be on the right track. Teleological functionalism is one such flawed theory. It interprets the idea that the mind is the function of the brain as meaning that the mind is the goal, or purpose, of the brain, much as the function of a can-opener is to open cans. Although it seems unlikely, it certainly isn’t impossible. If consciousness had some survival value for the conscious being, evolution would have selected for it, resulting in an organism that devoted some of its biological resources to the function of being conscious, just as gills evolved to have the function of extracting oxygen from water, and the beaks of hummingbirds to have the function of extracting nectar from flowers. The problem with this interpretation is not that it is implausible; in fact I think it likely that consciousness has some survival value. It is rather that interpreting functionalism in this way leaves us in the dark as to how the brain accomplishes this function, and as to what consciousness itself is, among other questions; it leaves unanswered the various questions that functionalism was supposed to address, meaning that in addition to this interpretation of functionalism we would need to endorse another interpretation of functionalism, or a version of the identity theory.
6.4: Argument Against Causal Functionalism
A second interpretation of functionalism, advocated by Armstrong and Lewis, is causal functionalism, and I would contend that it too cannot satisfactorily answer our questions about the mind. Causal functionalism holds that mental properties, and consciousness, depend on causal relations. And certainly this seems like a plausible interpretation of functionalism, since function and causation are closely connected. In essence, causal functionalism holds that a specific mental property or state is simply something that causes a certain range of behaviors and mental states, and is caused by a certain range of perceptions and mental states. But there is an obvious problem with this version of functionalism, which is that it tends to collapse into behaviorism. Consciousness is one of the things that we want our theory about the mind to explain, and how can causal functionalism explain it? Causal functionalism makes no claims about the nature of the states that cause behavior and are caused by perception, and so when we attempt to explain consciousness we are either going to have to admit that it is unexplainable by this theory (i.e. the theory can explain why you claim to be conscious, but it cannot explain why you experience being conscious) or claim that it is only some kind of behavioral illusion. So a causal functionalist theory might not be wrong, in the sense that it might correctly predict behavior, but it can’t answer the questions about the mind that we developed the theory to investigate in the first place; it can only explain our behavior, and behavior is not the deep mystery.
6.5: Remaining Possibilities
There are several other possibilities, of course. One such possibility is that consciousness depends on the way the brain handles signals or information. And this is the possibility that I will pursue in section 7. But, more importantly, I will attempt to start from an understanding of consciousness and work backwards from it. Often it seems that when we develop a theory that mental properties depend on one thing or another, the theory ends up unable to explain consciousness; it simply ends up insisting that consciousness is somehow identical to whatever functional properties are favored. So, in the hopes of avoiding this problem, I will instead start from a criterion of what systems in general can be considered conscious, and then I will work back from that criterion to see what kind of materialist theory best describes those systems.