On Philosophy

January 21, 2007

Representation

Filed under: General Philosophy, Mind, Self — Peter @ 12:09 am

Some philosophers who study the mind propose that what makes the brain conscious is that it is self-representational. Personally I am inclined to agree that the brain represents itself, but not that this alone is responsible for consciousness. However, to properly address questions about the possibility of self-representation, and about whether the brain can represent anything at all, we must first develop a theory of representation.

But first consider this image:
[Image: infpicture.jpg, a picture containing a smaller copy of itself]
What does it represent? At first glance it seems to represent itself, since the picture contains a copy of itself, representational features included. However, since I made the image in a finite amount of time, it obviously can’t contain copies of itself “all the way down”, which would require an infinite number of copies. So maybe it doesn’t represent itself (or does it?), but it certainly represents some other picture, not the one we are presented with here, that does perfectly represent itself. Even though it cannot contain an infinite number of copies of itself, it represents an image that does. (Is the image represented by the one presented here, the perfectly self-representational one, conscious? I doubt it, which is why I doubt the theory that self-representation alone is responsible for consciousness.)

But, setting the above thought experiment aside for the moment, what does it mean to claim that one thing represents another? One easy move is to say that if something like a picture is representational, it is so in virtue of giving us an idea about the thing represented. Under this interpretation, the picture above represents a perfectly self-representational picture only because, when I saw it, the idea of such a picture came to mind. Sometimes this definition might be workable, but unfortunately not in the context of developing a theory of representation in order to understand the mind, as that would be viciously circular. It is also unable to describe situations in which there is a representation but no mind to interpret it. For example, consider two robots that wander around a room, creating a map of obstacles as they go. One of these robots can share its map with the other. But since the robots don’t literally have maps, what they share is data that represents the layout of the room. And we know that the data must represent the room (rather than being a meaningless collection of ones and zeros) because, when the second robot receives it, that robot is able to use the information to avoid obstacles it hasn’t encountered before. The theory of representation we are looking for needs to be general enough to encompass such cases as well.
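To make the robot example concrete, here is a minimal sketch in Python (the functions, the grid encoding, and the specific obstacle set are all my own illustrative assumptions, not anything essential to the example). The shared bytes count as representing the room precisely because a receiving robot can reliably recover obstacle locations from them and act accordingly:

```python
import json

def encode_map(blocked_cells):
    """Serialize a set of (x, y) obstacle cells into transmittable bytes."""
    return json.dumps(sorted(blocked_cells)).encode()

def decode_map(data):
    """Recover the obstacle layout from the raw bytes."""
    return {tuple(cell) for cell in json.loads(data.decode())}

def safe_moves(position, data):
    """Use the received data to avoid obstacles this robot never saw itself."""
    blocked = decode_map(data)
    x, y = position
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in candidates if c not in blocked]

# Robot A explored the room and found two obstacles; robot B never has.
data = encode_map({(1, 0), (0, 1)})
print(safe_moves((0, 0), data))  # [(-1, 0), (0, -1)]: B steers around them
```

The point of the sketch is that nothing in it appeals to a mind interpreting the data; the bytes represent the room in virtue of what the second robot can reliably do with them.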

Another possible definition of representation is in terms of causation. We might say that one thing represents another only if the two are related by the right kind of causal relationship. Of course this is simply another version of the causal theory, or causal functionalism (see yesterday), and it has the same problems. For example, the picture I used as an example at the very beginning was not caused by a picture that is perfectly self-representational, yet we all agree that is what it represents. Likewise, we might create realistic pictures of imaginary places and people, and these pictures would be understood to represent those places and people. Now some might respond by arguing that this is not the case, that the pictures instead represent our conceptions of those places and people. But that can’t be correct, because if we wanted to represent our idea of a fictional place or person we wouldn’t create a realistic picture; we would create a very stylized one, in order to convey that the picture is about our personal relationship to the subject matter, and not representational of the subject matter alone (in fact a great deal of modern art seems to explore this idea).

So, now that I have argued against what I think are unworkable theories of representation, allow me to present one that I think does work, one rooted in information theory. The “informational” theory of representation says that a thing A represents a thing B if information about B can be reliably gleaned from A. Or at least this is the simple version. Certainly this theory seems to cover the cases presented here. The picture I have referred to so often represents a perfectly self-representational picture because we can see the pattern, and it thus suggests to us a picture with that pattern extended infinitely. Likewise, a picture of a fictional person or place might be said to represent that person or place because it gives us information about them (specifically, what they look like), even if there is no real object. To be a completely accurate account, though, the theory must relativize representation to the system receiving it. Different systems will deduce different facts from the same item, and thus the same item will represent different things to them. For example, for most of us a smooth rock is simply a smooth rock. However, for people well informed about geology and erosion, a smooth rock gives information about the water that brought it to that shape. Similarly, the picture at the top represents a perfectly self-representational picture only because we know how to extend the pattern in our minds; it would not represent one to a system that could not extend the pattern, or that would not conclude the pattern was incomplete.
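We can make this relativization concrete with a rough sketch (the encoding of the picture as a list of nested frame sizes, and both “systems”, are purely illustrative assumptions). The same finite data yields information about an infinite, perfectly self-representational picture only to the system that can extract the generating rule:

```python
def naive_system(frame_sizes):
    """Sees only the raw item: a finite list of nested frame sizes."""
    return {"frames_seen": len(frame_sizes)}

def pattern_extending_system(frame_sizes):
    """Knows the generating rule, so the finite picture yields information
    about the infinite, perfectly self-representational picture."""
    ratio = frame_sizes[1] / frame_sizes[0]
    return {
        "frames_seen": len(frame_sizes),
        "rule": f"each nested copy is {ratio:g} times the previous",
        "next_frame": frame_sizes[-1] * ratio,  # and so on, without end
    }

frames = [100.0, 50.0, 25.0]  # the finitely many copies actually drawn
print(naive_system(frames))              # nothing beyond what is shown
print(pattern_extending_system(frames))  # gleans the infinite continuation
```

The item is the same in both calls; what it represents differs because what can be reliably gleaned from it differs between the two systems.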

And with the informational theory of representation we are ready to tackle the problem of self-representation in general. In fact, we can address the more difficult form of the question, which is to ask how a system can represent itself to itself. Under the informational theory the answer is relatively straightforward. Assume the system contains blocks of information that are dealt with as whole units (that is to say, if anything is to represent the system itself it must be one of these blocks). One of these blocks would then represent the system if the system could reliably determine information about itself from it. This isn’t hard to imagine; for example, the system might be designed to store its current status in some area, including information such as how much storage capacity remains, etc. But how can we tell whether the system is deducing information about itself from this raw data, or whether the data is simply being stored? Well, assuming we have full knowledge of how the system works, we can conclude that it has deduced the relevant facts if it uses that self-information to make decisions (not necessarily behavioral decisions) that make sense on the basis of that information; for example, when the system is low on free space it might decide to delete some data.
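Here is a minimal sketch of such a system (the class and its fields are hypothetical, chosen only to mirror the free-space example above). The status block counts as a self-representation, on this account, because the system’s decisions are driven by facts about itself read from that block, rather than the block sitting inert in storage:

```python
class Store:
    """A toy system whose status block is a candidate self-representation."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.status = {"free": capacity}  # the self-describing block

    def _update_status(self):
        self.status["free"] = self.capacity - len(self.items)

    def add(self, item):
        # The decision reads the status block, not the items directly;
        # acting sensibly on that block is our evidence that the system
        # deduces facts about itself from it.
        if self.status["free"] == 0:
            self.items.pop(0)  # low on free space: delete some old data
        self.items.append(item)
        self._update_status()

store = Store(capacity=2)
for item in ["a", "b", "c"]:
    store.add(item)
print(store.items, store.status)  # ['b', 'c'] {'free': 0}
```

If the `add` method ignored the status block and the block were merely written and never consulted, nothing in the system’s behavior would license saying it represents itself to itself.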

Now there are two notable features of this account. One is that what is represented depends on a system “knowing” the relevant information (being able to use it), which in turn depends on how the system changes over time, that is, on how it reacts to information. The second is that self-representation no longer explains the mystery of consciousness, at least not obviously. We would still have to say which self-representational systems count as conscious, and why.
