On Philosophy

January 21, 2007

Representation

Filed under: General Philosophy, Mind, Self — Peter @ 12:09 am

Some philosophers who study the mind propose that what makes the brain conscious is that it is self-representational. Personally I am inclined to agree with the idea that the brain represents itself, but not that this alone is responsible for consciousness. However, to properly address the questions about the possibility of self-representation, and whether the brain can represent at all, we must first develop a theory about representation.

But first consider this image:
[Image: infpicture.jpg, a picture that contains a smaller copy of itself, which in turn contains a smaller copy, and so on for several levels]
What does it represent? At first glance it seems to represent itself, since the picture contains a copy of itself, including its representational features, within itself. However, since I made that image in a finite amount of time, it obviously can’t contain copies of itself “all the way down”, which would require an infinite number of copies. So maybe it doesn’t represent itself (or does it?), but it certainly represents some other picture, not the one we are currently presented with, that does perfectly represent itself. Even though it cannot contain an infinite number of copies of itself, it represents an image that does. (Is the image represented by the image presented here, the one that is perfectly self-representational, conscious? I doubt it, which is why I doubt the theory that self-representation alone is responsible for consciousness.)
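
To make the finite-versus-infinite point concrete, here is a minimal sketch (my own illustration, not part of the original post) that builds a “picture” containing a copy of itself down to a fixed depth. A perfectly self-representational picture would need the recursion to never bottom out:

```python
# A finite approximation of a self-containing picture, built by recursion.
# Any real image stops at some depth n; perfect self-representation would
# require infinite depth, which is why the actual image only *represents*
# (rather than is) a perfectly self-representational picture.

def nested_picture(depth):
    """Return a string 'picture' containing `depth` nested copies of itself."""
    if depth == 0:
        return "[ ]"                       # innermost copy: no further detail
    inner = nested_picture(depth - 1)      # the copy within the copy
    return "[ picture containing " + inner + " ]"

print(nested_picture(3))
# [ picture containing [ picture containing [ picture containing [ ] ] ] ]
```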

But, setting the above thought experiment aside for the moment, what does it mean to claim that one thing represents another? One easy move is to say that when something like a picture represents, it does so in virtue of giving us an idea that is about the represented thing. For example, under this interpretation the picture above represented a perfectly self-representational picture only because, when I saw it, the idea of such a picture came to mind. Sometimes this definition might be workable, but unfortunately not in the context of developing a theory of representation in order to understand the mind, as that would be viciously circular. It is also unable to describe situations in which there is a representation but no mind to interpret it. For example, consider two robots that wander around a room, creating a map of obstacles as they go. One of these robots can share its map with the other. But since the robots don’t literally have maps, what they share is data that represents the layout of the room. And we know that it must represent the room (versus being a meaningless collection of ones and zeros) because, when the second robot receives it, that robot is able to use the information to avoid obstacles it hadn’t encountered before. The theory of representation we are looking for needs to be general enough to encompass these cases as well.
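
Here is a minimal sketch of the two robots (the Robot class, the grid-cell map format, and the serialization are my own hypothetical choices, not anything from the post). The shared bytes count as representing the room because the receiving robot can reliably recover obstacle locations from them and use them to steer:

```python
# Hypothetical sketch: two robots share obstacle maps as raw bytes.

class Robot:
    def __init__(self):
        self.obstacles = set()            # grid cells believed to be blocked

    def sense_obstacle(self, cell):
        self.obstacles.add(cell)          # build the map while wandering

    def share_map(self):
        # Serialize to bytes: "data that represents the layout of the room".
        return ";".join(f"{x},{y}" for x, y in sorted(self.obstacles)).encode()

    def receive_map(self, data):
        # The data represents the room to this robot because it can reliably
        # recover obstacle locations from it...
        for pair in data.decode().split(";"):
            x, y = pair.split(",")
            self.obstacles.add((int(x), int(y)))

    def safe_to_enter(self, cell):
        # ...and use them to avoid obstacles it never sensed itself.
        return cell not in self.obstacles

a, b = Robot(), Robot()
a.sense_obstacle((2, 3))
b.receive_map(a.share_map())
print(b.safe_to_enter((2, 3)))            # False: b avoids a cell only a saw
```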

Another possible definition of representation is in terms of causation. We might say that one thing represents another only if the two are related by the right kind of causal relationship. Of course this is simply another version of the causal theory, or causal functionalism (see yesterday), and has the same problems. For example, the picture that I used as an example at the very beginning was not caused by a picture that is perfectly self-representational, yet we all agree that is what it represents. Likewise, we might create realistic pictures of imaginary places and people, and these pictures would be understood to represent those places and people. Now some might respond by arguing that this is not the case, that the pictures represent our conceptions of those places and people. But that can’t be correct, because if we wanted to represent our idea of a fictional place or person we wouldn’t create a realistic picture; we would create a very stylized one, in order to convey that the picture is about our personal relationship to the subject matter, and not representational of the subject matter alone (in fact a great deal of modern art seems to explore this idea).

So, now that I have argued against what I think are the unworkable theories of representation, allow me to present one that I think does work, one rooted in information theory. The “informational” theory of representation says that a thing A represents a thing B if information about B can be reliably gleaned from A. Or at least this is the simple version. Certainly this theory seems to cover the cases presented here. The picture that I have referred to so often represents a perfectly self-representational picture because we can see the pattern, and thus it suggests to us a picture with that pattern extended infinitely. Likewise a picture of a fictional person or place might be said to represent that person or place because it gives us information about them (specifically what they look like), even if there is no real object. To be a completely accurate account, though, we must relativize the representation to the system that is receiving it. Different systems will deduce different facts from the same item, and thus the same item will represent different things to them. For example, for most of us a smooth rock is simply a smooth rock. However, for people who are well informed about geology and erosion a smooth rock gives information about the water that brought it to that shape. Similarly, the picture at the top only represents a perfectly self-representational picture because we know how to extend the pattern in our minds; it would not represent a perfectly self-representational picture to a system that could not extend the pattern, or would not conclude that the pattern was incomplete.
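
One rough way to formalize this relativized account (my own sketch; the post gives no formal definition) is to model the receiving system as a decoder function, so that the same item represents different things to different decoders, as with the smooth rock:

```python
# Sketch: "A represents B to system S" iff S can reliably glean information
# about B from A. Model S as a decoder function, so representation is
# relative to the receiving system.

def represents(item, fact, decoder):
    """True if `decoder` reliably extracts `fact` from `item`."""
    return decoder(item) == fact

rock = {"shape": "smooth", "type": "granite"}

def layperson(r):
    return None                            # sees only a smooth rock

def geologist(r):
    # Background knowledge lets this system infer the rock's history.
    return "shaped by water" if r["shape"] == "smooth" else "unknown"

print(represents(rock, "shaped by water", layperson))   # False
print(represents(rock, "shaped by water", geologist))   # True
```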

And with the informational theory of representation we are ready to tackle the problem of self-representation in general. In fact we can address the more difficult form of the question, which is to ask how a system can represent itself to itself. Under the informational theory the answer is relatively straightforward. Assume the system contains blocks of information that are dealt with as whole units (that is to say, if anything is to represent the system itself it must be one of these blocks). Such a block would represent the system if the system could reliably determine information about itself from it. This isn’t hard to imagine; for example, the system might be designed to store its current status in one area, including information such as how much storage capacity remains, etc. But how can we tell whether the system is deducing information about itself from this raw data or whether the data is simply being stored? Well, assuming we have full knowledge about how the system works, we can conclude that it has deduced the relevant facts if it uses that self-information to make decisions (not necessarily behavioral decisions) that make sense on the basis of that information; for example, when the system is low on free space it might decide to delete some data.
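
Here is a toy sketch of that story (all names and numbers are hypothetical). The system keeps a status block describing itself, and we credit it with deducing facts from that block, rather than merely storing it, because it makes a sensible decision on that basis: deleting old data when free space runs low.

```python
# Hypothetical sketch: a system whose status block represents the system
# itself, evidenced by decisions made on the basis of that block.

class StorageSystem:
    CAPACITY = 100                         # arbitrary illustrative capacity

    def __init__(self):
        self.data = []
        self.status = {"free": self.CAPACITY}   # the self-describing block

    def store(self, item):
        self.data.append(item)
        self.status["free"] = self.CAPACITY - len(self.data)
        self._housekeep()

    def _housekeep(self):
        # The decision that evidences deduction: the system acts sensibly
        # on its self-information rather than merely storing it.
        if self.status["free"] < 10:
            self.data.pop(0)               # delete the oldest item
            self.status["free"] = self.CAPACITY - len(self.data)

system = StorageSystem()
for i in range(95):
    system.store(i)
print(system.status["free"])               # 10: housekeeping keeps it at 10+
```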

Now there are two notable features of this account. One is that what is being represented depends on the system “knowing” that information (being able to use it), which in turn depends on how the system changes over time (that is, how it reacts to information). The second is that self-representation no longer explains the mystery of consciousness, at least not obviously. We would still have to say which systems that represent themselves to themselves count as conscious, and why.


15 Comments

  1. All of the theories of representation you presented require an interpreter to be potentially able to look at the representational object and figure out what it means. If the representational object represents the interpreter’s interpretation process… Hmm, I’m still not sure if that can work or not.

    I’m worried that your definition is overly general, and now everything represents everything else, since representation is just a matter of being able to draw inferences from what’s out there, and for a Laplacean demon, just looking at the world now is enough to predict everything that has and will happen. So the state of the universe now is a representation of every other state that the universe will have? You can argue that this doesn’t count as a representation, because there’s too much information there to be put to work by us, but actually, since the universe does use its current state to create its next state, the information is being interpreted!

    That sounds a lot closer to pantheism than I imagine you’re comfortable with…

    I’m still non-committal about it, but somewhat concerned.

    Comment by Carl — January 21, 2007 @ 1:29 am

  2. “now everything represents everything else”: no, not everything can be deduced from any one thing; in fact, everything can only be deduced from everything. And secondly, what is deducible from what depends on the system we are considering.

    “the state of the universe now is a representation of every other state that the universe will have”: representation to whom? What system is inferring facts from the entirety of the universe’s state? For such a being, one able to deduce future states (which would require awesome computational power), then sure. But such beings can’t actually exist in this universe, because then they could perfectly predict themselves, and no system can perfectly predict itself.

    Comment by Peter — January 21, 2007 @ 2:33 am

  3. I argued that unless we can say that the universe itself cannot be considered a being (which begs the question), then the universe itself is the being which interprets its current state in order to arrive at its successive state.

    Here’s another way of explaining my problem:

    1. To be considered conscious, a thing needs to be self-represented and self-interpreting of that representation.

    2. To be represented is to be able to be interpreted by something.

    3. To be interpreted by something means some (conscious?) system makes an inference-generating model by means of a representation.

    So, to be conscious, we need to be interpreters of the consciousness our neurons are representing, but the interpretation needs to be done by consciousness?

    Comment by Carl — January 21, 2007 @ 4:57 am

  4. The universe could only be said to be deducing information if it made choices or reacted in some other way on the basis of that information. It doesn’t. Therefore it can’t be said to be deducing anything. But a system can deduce facts and act on them without being conscious (see the example of the robots with maps). Thus your dilemma doesn’t go forward, because there is no need for consciousness in order to have a deduction of facts, and therefore representation can be defined without appeal to consciousness.

    Also you seem to be getting confused a bit in your formulations, in the language I am employing here your argument would be:
    1. To be considered conscious a system must represent itself (meaning it can reliably deduce information about itself)
    2. For one thing to represent another to some system that system must be able to deduce information about the latter from the former
    3. The ability to deduce information is evidenced by making choices (in behavior and/or internal state) appropriately in response to that information.

    Comment by Peter — January 21, 2007 @ 5:15 am

  5. I like your definition of interpreting as using a representation to make a decision, but what does it mean to decide? The universe takes its current state as the basis for its subsequent state. Why can’t we call that a “decision,” besides the fact that it feels a little anthropomorphic?

    Comment by Carl — January 21, 2007 @ 5:06 pm

  6. The key is that having information is evidenced by making appropriate choices. For example, a robot shows us that it has information about the position of a wall by avoiding it. Internally there is a process that takes that information and uses it. We wouldn’t say the universe is deciding because the universe doesn’t get from one state to the next by a process that uses information. Yes, if the universe changes then future states are different, but these differences do not reflect a plan, or a goal, or any kind of purposeful use of information, and as far as we know the universe is incapable of such use.

    Comment by Peter — January 21, 2007 @ 5:22 pm

  7. So, the universe cannot be considered as “deciding” because it does not have a goal/intentionality/teleology. But isn’t that begging the question? Consider a single electron floating through space versus an amoeba in a dish versus a robot versus a person: We all say the person has intentionality; a lot of people say a sufficiently complex robot has goals; some might say the amoeba has a “goal” of eating and reproducing; very few will say the electron has a “goal” of behaving in an electron-like fashion. However, an electron taking in the information it’s given and behaving in the appropriate way based on the field strengths of its present location or whatever seems to be analogous to an amoeba noticing a chemical in the broth and moving towards it to eat it, a robot getting data from its sensors and then acting on it, and a person noticing something and then acting on it. In each case, we can’t point to something physical and say, “This is the intentional part,” because intentionality is something that we project onto the object as a way of explaining to ourselves what the object will do. So, it seems odd that I can say, “An amoeba is acting intentionally when the presence of a chemical causes it to move in a certain direction,” but I can’t say, “An electron is acting intentionally when the presence of a field causes it to move in a certain direction.”

    Comment by Carl — January 21, 2007 @ 7:35 pm

  8. Not because it doesn’t have a goal, but because it can’t be said in any meaningful way to choose. Electrons do not take in information and “decide” to behave; electrons are controlled by natural law, and natural law is not controlled by electrons.

    Comment by Peter — January 21, 2007 @ 8:10 pm

  9. Are you saying the amoeba aren’t controlled by chemistry, robots by their programming, humans by scientific laws?

    Comment by Carl — January 21, 2007 @ 8:46 pm

  10. Yes, of course, but in their case you can legitimately say that their complex internal state, a physical cause, was responsible for their actions. Electrons have no internal state; the cause of an electron’s behavior can’t seriously be attributed to the electron. And even some systems with a very limited internal state can’t be said to choose, if that internal state doesn’t play a significant part in their future actions and internal state.

    Comment by Peter — January 21, 2007 @ 8:52 pm

  11. I think electrons can be said to have some internal state — their position/momentum, that they’re electrons as opposed to whatever other subatomic particles, their energy, etc. But whatever.

    Intentionality seems to be just another means by which to arrive at the same prediction. More complex things can be predicted in less specific terms by ignoring some parts of it. For example, heat theory ignores the motions of individual atoms, but the predictions made by examining each atom would be slightly more specific. I can vaguely predict what you’ll do if I know your motivations, but if I knew all your atoms’ positions and momentums, I could make an even more detailed prediction.

    Comment by Carl — January 21, 2007 @ 9:45 pm

  12. Who said anything about intentionality?

    Comment by Peter — January 21, 2007 @ 10:07 pm

  13. I guess I shouldn’t have said intentionality. I mean any meta-explanation, like a “decision.” For example, we might say, “If I program if (x==1) { doSomething(); }, the computer decides whether to do something or not by evaluating the value of x.” Or we can describe the exact same situation as, “The electrons within the CPU are affected by the electrical charges within it such that their motions are blah, blah, blah…”

    Comment by Carl — January 21, 2007 @ 10:48 pm

  14. But that long explanation would be the same thing, i.e. describing a system that decides.

    Comment by Peter — January 22, 2007 @ 12:10 am

  15. In his last two blog posts, Scott Adams (the writer of Dilbert and middlebrow autodidact philosophizer) has hit on a Spinoza-like model of the universe in which he asks why we can’t say that the Big Bang is intelligent, given that it has caused the emergence of, e.g., your blog, the internet, and everything else humans have done that is considered proof of our intelligence. His model would teeter over into pantheism but for the vigilance of his Mensa crowd’s “rebellious” atheism.

    I don’t really agree with him, but I do think it’s funny that he’s hit on the same sorts of questions that I’m asking about consciousness and representation, only he’s calling the property in question intelligence instead. Adams is a smart guy, but he’s so smart that it makes him overconfident, so he suffers the usual problems endemic to those whose philosophy hasn’t been tempered by time in the academy.

    Anyhow, it might be something good for you to write a rebuttal to.

    Comment by Carl — February 1, 2007 @ 2:32 am

