1: The Problem
How does a word have meaning? In the broadest possible terms there are three accounts of meaning. On one side we have the externalists, who say that meaning is a combination of internal states and what really exists in the world (the meaning of a word is dependent on what is physically out there). Then we have the conventionalists, notably Wittgenstein, who claim that the meaning of a word is defined completely by how we use it. Finally we have the internalists, who say that the meaning of a word is completely determined by our mental states, such that there are features of our minds that provide the meaning behind our words.
Here I will provide an internalist account of meaning and show how such an account can answer at least some of the objections of conventionalists (specifically Wittgenstein) and of externalists (specifically Putnam). Why is an internalist account of meaning preferable to an externalist or a conventionalist account? Primarily I would reject externalism because it violates expectations of locality (expectations concerning the nature of our minds and the causal structure of the world). We think that what a word means should have an impact on our mental states, such that the meaning of a word may cause me to have one thought instead of another. For example, I might have the thought “Santa Claus is fictional” in part because of the meaning of “Santa Claus”; if I thought “Santa Claus” meant Saint Nicholas I would not have had that thought. However, for an externalist the meaning is partly determined by external objects, which would imply that these non-local and external objects somehow have a causal effect on our minds. This seems unlikely, especially when the objects in question are fictional or situated far in the past. I would reject conventionalism because it does not explain (at least to my satisfaction) how our own private thoughts can be meaningful. Since I am not bound by conventions of use when I think to myself, I may well invent new labels for situations or concepts I deal with often, labels which seem to have meaning to me; it is hard to defend how these private labels might have meaning in a conventionalist account (although attempts have been made). In any case internalism seems like a much simpler and more elegant solution to these problems.
2: Mental Models
So what provides our internal meaning of a word? In short, mental models (if that seemed obvious to you it would be sensible to skip down to the objections section). A mental model is a collection of information, such as ideas, images, and properties, that we have associated with a specific concept (which may or may not correspond to a single word). What a word means to us is captured completely by this mental model, or so I claim.
2.1: Where Do Mental Models Come From?
Of course these mental models don’t simply appear in our minds. As we observe the world directly, or learn about it indirectly, we create mental models using this information. Our mental models are also subject to revision; for example, a child’s mental model of Santa probably includes the idea that he is a real person. Later that mental model will be revised to include the idea that he is a fictional person instead. Initially words are associated with mental models purely by conjunction (your parents repeatedly show you an object and say a word, and you come to include those sounds in your mental model of the object as something that is associated with it). Later of course words can be picked up through use (in which we infer which mental model corresponds to the word), through a dictionary (which helps us either associate a new word with an existing mental model or helps us create a new mental model for the word), or by the creation of new words (in which we assign a new sound to a mental model that has none).
2.2: How Do Mental Models Reference One Another?
At this point you may have already formed the standard objection to internalist theories that claim we have representations of objects in our minds. Consider the case where concept A is related to concept B and concept B is related to concept A, strongly enough that they should both be included in the mental models of each other. Does this create a mental loop in which the models must be (impossibly) infinite in size? No, I think this demonstrates that models are connected to other models through “pointers” (many thanks to David W Smith for this idea). For the sake of simplicity, assume that each model has a number (corresponding to where it is stored, let us say). If model A is associated with model B in some way it will not contain a reproduction of model B, but a “pointer” (the number associated with B) that allows the mind to unconsciously retrieve B when needed. To the conscious mind, A and B are then both transparently associated with each other, without the need for infinite mental storage.
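The pointer idea can be sketched in a few lines of code. This is only an illustration of the data structure, not a claim about how minds are actually implemented; all names here (`models`, `retrieve`, the ids “A” and “B”) are my own, not from the text.

```python
# Sketch: mutually related models store each other's storage id
# (a "pointer"), not a copy, so no infinite nesting is required.

models = {}  # id -> model, standing in for "where it is stored"

def add_model(model_id, info, links=()):
    models[model_id] = {"info": info, "links": list(links)}

def retrieve(model_id):
    # "Unconscious" retrieval: follow a pointer to the full model.
    return models[model_id]

# A and B are each associated with the other, yet each stores
# only the other's id.
add_model("A", "concept A", links=["B"])
add_model("B", "concept B", links=["A"])

followed = retrieve(retrieve("A")["links"][0])  # follow A -> B
assert followed["info"] == "concept B"
```

Because each model holds only an id, the loop between A and B costs two small records rather than an infinite regress of nested copies.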
2.3: What About Indexicals?
Of course not all words correspond to a simple unchanging model. Words like “I” and “here” cannot be associated in a fixed way with a single mental model. Of course there is no description of the meaning of words in which indexicals can remain fixed, so this is to be expected. For simple indexicals such as “I” and “here” it seems reasonable to suppose that we have a single mental model that we construct representing our self image (note that this is not the same as self-consciousness) and another we construct that contains information about our current location. More complicated are indexicals such as “him” and “this”. It seems unreasonable that we are constantly constructing a new model for whatever has our interest; instead I would propose that we have constructed models for most people and objects in our environment (even if they are only pointers to pre-existing concepts; if you have seen one floor lamp you have seen them all). We then point “him”, “this”, etc. at the model corresponding to whatever object has our attention, giving them a meaning until our attention moves to something else.
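This re-pointing of demonstratives can be sketched as a table of indexicals whose entries are reassigned as attention shifts. Again a sketch under my own assumptions; the model names are invented for illustration.

```python
# Sketch: "this" as a pointer re-aimed at whichever stored model
# currently has our attention. All names are hypothetical.

models = {
    "floor_lamp": {"kind": "lamp", "tall": True},
    "man_in_hall": {"kind": "person", "coat": "grey"},
}
indexicals = {}  # indexical word -> id of the attended model

def attend_to(model_id):
    # Attention shifts: the demonstrative is re-pointed, the
    # underlying models themselves are untouched.
    indexicals["this"] = model_id

attend_to("floor_lamp")
assert models[indexicals["this"]]["kind"] == "lamp"

attend_to("man_in_hall")  # "this" now means something else
assert models[indexicals["this"]]["kind"] == "person"
```

The word’s meaning changes between the two `attend_to` calls even though no model was created or destroyed, which is the point: the indexical is just a movable pointer.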
Perhaps it is best to illustrate this process with an example (stolen, of course). Say that I have a mental model of a man, David W Smith, who among other things I know is the author of The Circle of Acquaintance. Now say that I come across a man in the hallway. I construct a mental model of this man, containing information about what he looks like, and associate an indexical such as “this man” with it. After I talk to him for some time I realize that the man really is David W Smith. This “feeling of insight” is a manifestation of the two mental models being merged into one, now containing the information about his works and opinions as well as the information about his physical presence in front of me. Before, the words “David W Smith” and “this man” were associated with different mental models and meant something different to me; now they are associated with the same mental model and mean the same thing to me. This sort of phenomenon is especially hard to capture with an externalist theory, but internalism (and perhaps conventionalism) handles it well.
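The merge itself is mechanically simple if models are records and words are labels pointing at them. A minimal sketch, with all field names and ids invented for illustration:

```python
# Sketch: the "feeling of insight" as two models merging into one,
# after which both labels point at the same merged model.

models = {
    1: {"name": "David W Smith", "author_of": "The Circle of Acquaintance"},
    2: {"appearance": "man in the hallway"},
}
labels = {"David W Smith": 1, "this man": 2}

def merge(keep_id, drop_id):
    models[keep_id].update(models[drop_id])  # combine the information
    del models[drop_id]
    for label, model_id in labels.items():   # re-point orphaned labels
        if model_id == drop_id:
            labels[label] = keep_id

merge(1, 2)
assert labels["David W Smith"] == labels["this man"]  # same meaning now
```

Before the merge the two labels pick out different models and so mean different things; afterwards they are pointers to one and the same model, which is the internalist reading of “now they mean the same thing to me.”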
2.4: How Is Language Possible?
One of the most difficult aspects of an internalist theory of meaning is to explain the phenomenon of language. Generally we are able to understand each other fairly well, and some would think that this shows that meaning is public (and thus conventional or external and not internal). I defend this account by claiming that in fact there is no such thing as a shared meaning of words. However, the private meanings that we have for most words tend to coincide, although there are slight differences. Why would our internal meanings of words happen to coincide? Well, the process by which we learn to associate new words with their meanings is designed to help everyone achieve the same meanings for words. Of course the process is imperfect, and that is how a word’s meaning changes over time, and why people may disagree as to what a word means. On the other hand, it is much harder in an externalist or conventionalist theory to explain how disagreement or changes in meaning are possible.
In fact language can provide us with excellent reasons to think that meaning is internal. Take for example the sentence “Quis deus, o Musae, tam saeva incendia Teucris avertit?” Does this sentence have meaning? Only if you understand Latin. To those who can read Latin there is meaning, but for most of us there is none. The only way an externalist can deal with this phenomenon is to assume that the meaning is somehow there but hidden from most readers. However, in this case what keeps us from saying that the sentence “Yethik, haglich deitrath boulathica lilitich” also has some meaning, in a language that has been forgotten, has yet to be invented, or is spoken in another galaxy? And how did the meaning get into this sentence? I clearly didn’t put it there. As I understand neither Latin nor this made-up language, it can’t have been my intention while writing down these sentences that gives one meaning and not the other.
2.5: Syntax and Semantics
Another motivation to adopt the mental models account, or an account like it, is to establish how syntax is different from semantics. It has been argued by some that semantics is what distinguishes a conscious system, which means something when it talks to us, from an unconscious system, which can simply use words in a way that can fool us. Although I would argue that this isn’t all that is necessary for consciousness, it may certainly be one of the factors. A conventionalist account, however, cannot make this distinction. If meaning is simply the ability to use a word properly, then the unconscious system that is able to fool us has as much meaning behind its words as we do. However, if you hold that the mental models account of meaning is accurate, then we could demonstrate that the system was unable to form mental models, and thus could not understand words in the way a conscious person does.
3: Objections
Of course there are well known objections to this kind of view. A conventionalist objection, first brought up by Wittgenstein, is: how then do we understand a word such as “game”? Wittgenstein argued, successfully I think, that there is no set of properties that could be devised that would include all activities we call games and exclude the activities that are not games. If we simply believed that mental models were lists of properties this would indeed be a problem. However, the information associated with a model is not restricted to dictionary-like definitions (of the kind Wittgenstein was arguing against). Instead our mental model of the word game may include a collection of games that we treat as exemplars. We then call an activity a game when it is similar to one of the activities that are part of our mental model of game. Of course when we call something a game we don’t feel like we are comparing it against a set of exemplars. I would explain this away by claiming that the process of matching against exemplars, and probably most of how we tell when something fits into one of our mental models, is unconscious.
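Exemplar matching of this kind can be sketched as similarity against stored examples rather than a checklist of necessary properties. The exemplars, features, and similarity threshold below are all invented for illustration; the point is only that nothing definition-like appears anywhere in the classifier.

```python
# Sketch: classifying something as a "game" by similarity to stored
# exemplars, not by a fixed list of defining properties.

exemplars = {
    "chess":     {"competitive", "rules", "two_players", "board"},
    "solitaire": {"rules", "cards", "solo"},
    "tag":       {"competitive", "physical", "children"},
}

def similarity(features, exemplar):
    # Jaccard similarity: shared features over total features.
    return len(features & exemplar) / len(features | exemplar)

def seems_like_a_game(features, threshold=0.4):
    # An activity counts as a game if it is close enough to ANY
    # exemplar; no single property is required of all games.
    return any(similarity(features, ex) >= threshold
               for ex in exemplars.values())

checkers = {"competitive", "rules", "two_players", "board"}
commuting = {"solo", "physical"}
assert seems_like_a_game(checkers)
assert not seems_like_a_game(commuting)
```

Note that checkers matches via chess and solitaire-like card games would match via solitaire, even though those two exemplars share almost no features, which is exactly the family-resemblance structure Wittgenstein pointed to.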
Another objection to an internalist account of meaning comes from the work of Putnam: the problem of Twin Earth. We assume that on Twin Earth the substance we call water has been replaced by a substance with nearly identical properties, which we will call XYZ, but which is in some discoverable way different. Now we compare residents of Earth and Twin Earth from pre-technical periods, before any possible differences between XYZ and H2O have been discovered. If we put them in a room they might both become thirsty and ask for a glass of water. However, the person from Twin Earth really means a glass of XYZ and our Earth resident means a glass of H2O. Putnam claims that this difference shows that meaning is not localized in the speaker, since they mean two different things, and these differences can only be explained by appealing to the external aspects of the world, and not their mental states, which are conceivably identical.
The problem with this argument for externalism is that it assumes the meaning of a word is only one particular kind of thing, while I would maintain that the meaning of a word is whatever fits its mental model, which may be more than one kind of thing. For example, we all think we have a fairly clear idea of what steel is; we know what we mean by steel. However, let us say that sometime in the future we discover that what we have been calling steel is really two different substances (sometimes it is one substance, steel-1, and sometimes it is steel-2, not a mixture of the two). If we ask ourselves, though, “when I say steel do I mean steel-1 or steel-2?”, the answer is really neither: we want “steel”, which is either steel-1 or steel-2, and it doesn’t matter to us since we currently have no way of knowing the difference. The people in our thought experiment are in a similar situation: they don’t mean either XYZ or H2O, they mean water, and to them water is anything that will fit their mental model of water. To them XYZ and H2O are equally water, and thus they mean both of them. We are tripped up in the thought experiment because we know the difference, and to us water means one and not the other. Putnam, however, cannot run the argument with people who know about the difference between XYZ and H2O, because then, although it is true that we mean one and not the other, our mental states are different from those of the person on Twin Earth (due to the knowledge of the difference).
Of course to accept this answer you must also accept that meaning may change over time. Since we know more about water, what we mean by water is different from what people who lived many generations before us meant. Putnam would not accept this, because to him meaning is closely tied to truth, and the truth doesn’t change with time. Yes, it is correct that the truth of a statement is dependent on factors in the external world, but the meaning of a statement and the truth of a statement are two different things.
Of course the fact that this account can stand up to objections does not necessarily invalidate a conventionalist or an externalist account. Really the account of meaning that has been described here is part of a larger picture, a search for an objective description of consciousness; you can’t describe consciousness if you can’t say how it is that our words and thoughts have meaning. Ideally the whole picture will be not just consistent but deduced from a small set of initial premises, at which point accepting the truth of the premises will guarantee the truth of the whole picture, removing the need to argue against the other explanations of meaning.