On Philosophy

September 27, 2008

Addendum: First Order Logic Without Objects

Filed under: Logic,Metaphysics — Peter @ 10:43 am

One argument for keeping the properties + objects model around (versus the bundle of properties model) is that it is integrated into first order logic. However, it is relatively simple to construct a logical system with the same expressive power as first order logic without that model.

The system is as follows:

All rules of syntax are the same as first order logic, except that instead of atomic formulae of the form Px we instead have property bundles of the form {A, B, C, D …}. And quantifiers range over properties; i.e. members of bundles. Later we may treat these bundles in the same way as sets (using some of the same notation), and like sets their members should be considered unordered. However, unlike sets, the expression of a property bundle of the form {A, B, C} does not mean that it is necessarily limited to those three properties; it may contain additional, unlisted, properties. If necessary a property bundle can be named with a subscript, such as {A, B}a.

The following rules of inference are also permitted (in addition to the usual ones):
Xa → Ya, where Y ⊆ X
P1 = P2 → ({P1, A, B, …}a → {P2, A, B, …}a)
Xa & Ya → X∪Ya (although this isn’t very useful until the second-order version of the system)

This has the same expressive power as first order logic, which can be demonstrated as follows:
Anything that can be expressed in first order logic can be expressed within this system via property bundle pairs (of the form {A, B}), by turning both the designations for properties and the designations for objects under the old system into properties under the new system. For example, ∀x(Fx → Px) becomes ∀x({F, x} → {P, x}). Likewise anything expressible under the new system can be expressed in the old system by treating each property as an object and taking membership in a particular property bundle to be a unique predicate. For example ∀x({x, A, B}y → {x, C}z) becomes ∀x((Yx & Ya & Yb) → (Zx & Zc)).
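To make the rules concrete, here is a minimal Python sketch of the system, with bundles modeled as frozensets. The function names (entails, substitute, merge) are my own labels for the three inference rules, not part of the system itself:

```python
def entails(x, y):
    """First rule: from Xa we may infer Ya whenever Y ⊆ X."""
    return y <= x  # subset test on frozensets

def substitute(bundle, p1, p2):
    """Second rule: given P1 = P2, {P1, A, B, ...}a licenses {P2, A, B, ...}a."""
    return frozenset(p2 if p == p1 else p for p in bundle)

def merge(x, y):
    """Third rule: from Xa and Ya we may infer (X ∪ Y)a."""
    return x | y

# A bundle containing F and x entails its sub-bundles; merging and
# substitution recover the other two rules:
b = frozenset({"F", "x", "P"})
assert entails(b, frozenset({"F", "x"}))
assert substitute(frozenset({"F", "x"}), "F", "G") == frozenset({"G", "x"})
assert merge(frozenset({"F"}), frozenset({"x"})) == frozenset({"F", "x"})
```

Modeling bundles as frozensets also captures the stipulation that members are unordered, though unlike the bundles described above a frozenset's membership is fixed and fully listed.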

Of course the system, as described, can’t express some ideas, such as the idea that if a property bundle has a certain property then it must also have a second property. ∀x({F, x} → {P, x}) only expresses the idea that if a property bundle with F and x exists then a second with P and x must also exist, not that the first bundle contains F, P, and x. A partial solution is to designate some properties as uniquely identifying a particular “object” (the property of being a particular person, for example), such that they exist only in a single bundle because, in a way, they describe the bundle itself. A stop-gap way to express such properties is to say that, for them, the following rule of inference holds:
({z, A1,1, A1,2, …} & {z, A2,1, A2,2, …} & … & {z, An,1, An,2, …}) → {z, A1,1, A1,2, …, A2,1, A2,2, …, …, An,1, An,2, …}, where z is a unique identifier
However, this simply guarantees that there is some bundle with all the properties associated with z, which is not quite what we wanted.

To really express the idea (as well as dependent properties, and other such things), we need to move to a second order logic, one where bundles can be quantified over as well (this is why I introduced the idea of naming them previously).
Then we could define a unique identifier as follows:
∀x∀y(({u}x & {u}y) → ∀z({z}x ↔ {z}y)), making u a unique identifier
Also we could express the idea that property Y depends on property X (i.e. any bundle that has Y must have X) as follows:
∀x({Y}x → {X}x)
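A quick way to see what these second-order definitions demand is to check them against finite collections of bundles. The following Python sketch (the function names and example bundles are invented for illustration) tests the unique-identifier and dependence conditions by brute force:

```python
def is_unique_identifier(u, bundles):
    """u is a unique identifier: any two bundles containing u agree on
    all their members, mirroring ∀x∀y(({u}x & {u}y) → ∀z({z}x ↔ {z}y))."""
    with_u = [b for b in bundles if u in b]
    return all(b == with_u[0] for b in with_u)

def depends(y, x, bundles):
    """Y depends on X: every bundle containing Y also contains X,
    mirroring ∀x({Y}x → {X}x)."""
    return all(x in b for b in bundles if y in b)

bundles = [frozenset({"socrates", "man", "mortal"}),
           frozenset({"plato", "man", "mortal"})]
assert is_unique_identifier("socrates", bundles)   # appears in one bundle only
assert depends("man", "mortal", bundles)           # every "man" bundle has "mortal"

# A second, differing bundle containing "socrates" destroys uniqueness:
bundles.append(frozenset({"socrates", "philosopher"}))
assert not is_unique_identifier("socrates", bundles)
```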

December 18, 2007

The Execution

Filed under: Logic — Peter @ 12:00 am

Once there was a prisoner who was condemned to die within the month. However, because of the nature of his crimes, the king, who never lied, didn’t feel this was sufficient punishment, and told the prisoner that he would only be executed on a day when he didn’t know that he would be executed, with the intent of keeping him in a state of suspense. Knowing this, the prisoner reasoned as follows: clearly I can’t be killed on the last day of the month, because given that I know that I will be killed within the month I would know that I would be executed on that day, and thus, since the king never lies, I cannot be killed on that day. But neither can I be killed on the second to last day of the month, because I know that I can’t be killed on the next day, the last day, as I have just established. Thus the same reasoning that I applied to the last day applies to the second to last day as well. And in this way I can deduce that it is impossible for me to be killed on any day of the month. So the prisoner concluded that the king’s statement entailed that he wasn’t to be executed after all. Despite his deduction the king kept his word, and five days later the prisoner was executed, to his great surprise.

The puzzle then is to uncover how the prisoner’s reasoning is in error, because clearly it must be in error given that it is obvious that the prisoner can still be executed and that he won’t expect it. To do that I think we first need to put into formal terms the prisoner’s reasoning, to see if it really is as gap-free as it initially appears, or whether there is some subtle non-logical inference that constitutes an error in the prisoner’s reasoning. Let’s begin then by naming the primitive propositions.
ex will express the claim that the prisoner is executed on day x
dx will reflect the claim that the prisoner is alive on day x
Obviously then dx ↔ ~e1 ∧ ~e2 ∧ … ∧ ~ex-1, or, in other words, the prisoner is alive on a particular day if and only if he hasn’t been executed on any of the previous days (setting aside death from natural causes). I will refer to this as proposition (a).
Additionally e1 ∨ e2 ∨ … ∨ en reflects the fact that the prisoner will be executed some time within the next n days. I will refer to this as proposition (b).
This reflects the basic facts of the prisoner’s condition, but we haven’t captured the idea that the prisoner can know a particular fact or the king’s claim that the prisoner won’t be executed on a day that he knows he will be executed.
We can simplistically capture the claim that some proposition φ is known by asserting K(φ).
And since we are confined to logical deduction in this situation we can capture the idea that if the prisoner can deduce some fact then he knows it by asserting that φ → K(φ), which means that upon concluding φ we can conclude that φ is known. I will refer to this as proposition (c).
The king’s assertion thus becomes K(ex) → ~ex. I will refer to this as proposition (d).

The prisoner’s reasoning is thus as follows:
suppose dn
thus ~e1 ∧ ~e2 ∧ … ∧ ~en-1 by proposition (a)
thus en by proposition (b)
thus K(en) by proposition (c)
thus ~en by proposition (d)
and now we can discharge our assumption, concluding:
dn → ~en

suppose then that dn-1
thus ~e1 ∧ ~e2 ∧ … ∧ ~en-2 by proposition (a)
thus en-1 ∨ en by proposition (b)
suppose that ~en-1
then dn by proposition (a)
but dn → ~en as established above
thus ~en
but this contradicts proposition (b)
therefore we can discharge our assumption and conclude that:
en-1
thus K(en-1) by proposition (c)
thus ~en-1 by proposition (d)
and now we can discharge our initial assumption, concluding:
dn-1 → ~en-1
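The non-epistemic core of these derivations can be checked by brute force: fix an execution day e (so that proposition (b) holds by construction), define dx from proposition (a), and confirm that surviving to the last day forces en, the step the prisoner relies on. A small Python sketch of my own:

```python
n = 5  # a small "month", for illustration

for e in range(1, n + 1):                       # (b): executed on some day e
    def executed(x, e=e):
        return x == e
    def alive(x, e=e):                          # (a): alive iff not yet executed
        return all(not executed(k, e) for k in range(1, x))
    if alive(n):
        assert executed(n)                      # d_n entails e_n, as derived above
```

No matter which day the execution falls on, surviving to day n leaves en as the only possibility, which is exactly where propositions (c) and (d) then generate the trouble.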

And this deduction can be carried on until we conclude that the prisoner can’t be executed on any day. And obviously this contradicts proposition (b), that the prisoner will be executed on some day. Unfortunately this reasoning cannot be extended to ω or more days, because it would result in an infinite length proof, and thus we would be unable to actually conclude anything about days 1, 2, 3, and so on. Unless, that is, we allow meta-theoretic proof by induction, where we conclude that since dn → ~en is provable, and dx-1 → ~ex-1 is provable whenever dx → ~ex is, dy → ~ey holds for all y. But of course that has nothing whatsoever to do with the topic at hand.
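The whole backward induction can also be simulated, with the epistemic step compressed into set elimination: a day is ruled out when surviving to it would leave it as the only candidate, so the prisoner would know, and the king’s promise excludes it. The encoding below is my own sketch:

```python
def surviving_candidates(n):
    """Run the prisoner's backward induction over an n-day month and
    return the days on which execution has not been ruled out."""
    ruled_out = set()
    for day in range(n, 0, -1):                    # reason backwards from day n
        remaining = set(range(day, n + 1)) - ruled_out
        if remaining == {day}:                     # reaching `day` makes e_day certain
            ruled_out.add(day)                     # ...so K(e_day) gives ~e_day
    return set(range(1, n + 1)) - ruled_out

# Every day is eliminated in turn, reproducing the paradoxical conclusion:
assert surviving_candidates(30) == set()
```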

So, to get back to the main issue, it appears that the prisoner’s reasoning was flawless after all. But, on the other hand, it does contain contradictions, not only as a whole, but in the idea that we can conclude that something is not the case after previously concluding it to be the case. Simply imagine the prisoner’s situation if he happened to live to the last day. Given that it was the last day and he knows he must be killed on some day he can deduce that he will be killed on that day. But, given the king’s assertion, he can also conclude that he won’t be killed that day. Thus he is able to conclude both a proposition and its negation, which seems absurd. And yet all the premises are true, because, given that he believes the king, he must conclude that he won’t be executed. Therefore if the king does come to execute him on the last day he clearly won’t know about it beforehand, because he has concluded that he is safe. Perhaps then the “solution” is to say that nothing can be known when contradictions can be derived, although obviously this is not something we can express in the formalism detailed above, because it is clearly incapable of discussing itself meta-theoretically.

But that opens up another problem. How should we reason if we find ourselves in the role of the prisoner (in the sense of entertaining something apparently true from an external perspective but which gives rise to contradictions when we reason about it, not in the sense of being condemned to death)? Should we just embrace the contradictions and thus conclude that everything is the case, as classical logic would imply? Clearly that seems absurd. One way out is to reject the claim that if we can conclude something deductively then we necessarily know it. And that is probably correct, as the case discussed here shows. But the problem can still arise if we are told that, for example, we won’t be executed on any day where we can deduce without contradiction that we will be executed. There are probably many ways of overcoming this problem, but allow me to put forward an idea that I have mentioned on several occasions already: that certain predicates which appear classical at first glance may really be partial truth predicates (partial functions, often invoked in discussions of computability, that yield truth values). The case we are considering is then simply one where that predicate doesn’t yield a value, given that if we assume it to yield a value contradictions arise (we can’t assume that it yields false, that the fact we will be executed can’t be deduced, since we have just deduced it; but neither can we assume that it yields true, since if it did then contradictions would arise, contradicting the claim that we could deduce that fact without contradiction). And thus no contradictions arise, and we can reason classically without coming to the conclusion that everything is the case. (I would also “cancel out” the king’s assertion, meaning that we would reason the same way about when we would be executed whether or not he makes the assertion. Which is more evidence that this is a satisfactory way of approaching the problem, since the king’s assertion doesn’t actually add information from which we might reach new conclusions.)

December 14, 2007

The Logic Of Language

Filed under: Language,Logic — Peter @ 12:00 am

It’s not hard to generate apparent paradoxes by applying logical rules to ordinary uses of language. One famous example of this is the liar, the sentence that says “this sentence is false” (which, if true, is false, and vice versa, and which thus can be neither true nor false, apparently). Another, slightly less famous, example is considering whether the present king of France is bald or not bald. Clearly everything is either bald or not-bald, but the present king of France, as a non-existent entity, can be asserted to be neither. What do such paradoxes tell us? Some have taken them to imply that ordinary language is logically inconsistent. Certainly it might be, however I don’t see much evidence of such inconsistency. Inconsistency, in the kinds of logic usually adopted to determine the implications of these natural language expressions, is a problem because a single inconsistency would imply that every statement was true. But, despite these “inconsistencies”, we have no trouble actually reasoning. It is not the case that, upon coming across the liar, we subsequently become convinced that every statement is true or follows from it. Indeed most people would deny that anything is true as a logical consequence of the liar. This strongly implies that, whatever is going on, language is not inconsistent.

If we don’t want to simply throw our hands up and admit that language is inconsistent there are two possible ways to solve these problems. One is to hold that what is expressed logically by a sentence is more complicated than it seems. This is the solution commonly given to the linguistic dilemma posed by “the present king of France”, as described by Russell. Instead of taking it to be an object to which properties might apply, such constructions are taken instead to be a shorthand, expressing something like “there is a single object x such that x is the present king of France and x is bald”. This solves the “paradox” by allowing both the assertion that the present king of France is bald and the assertion that he is not bald to be false, because they no longer assert something of the form Bx or ~Bx, of which one must always be true. And a similar solution could conceivably be given for the liar, where the expression “this sentence” is taken to be a shorthand for something more logically complex. Such solutions may seem appealing initially, but they have their drawbacks. For example, consider reasoning such as “if the wall is less than 5ft high I won’t get hurt; if the wall is not less than 5ft high I won’t get hurt (because of some safety device); therefore, because the wall is either less than 5ft high or not less than 5ft high, I won’t get hurt”. This reasoning appears sound, but it simply can’t work if we expand the definite description “the wall” as Russell would have us, because then it wouldn’t necessarily be the case that “the wall is either less than 5ft high or not less than 5ft high”, as both assertions may turn out to be false if the description can apply to more than one object or to no objects. Thus to actually derive the conclusion we must also be entertaining the premise that “there is exactly one object that satisfies the definite description ‘the wall’”. Since we clearly don’t entertain such assumptions when we reason (at least we don’t consciously consider them), our reasoning must be flawed on this account: even if the premises we are considering are all true, the conclusion may still be false because of the premises we didn’t consider. Which is to ask us to reason differently, despite the obvious fact that reasoning as we do works perfectly well.
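Russell’s analysis, and the wall problem it creates, can be sketched as follows; the domain and predicates are invented purely for illustration:

```python
def the(f, g, domain):
    """Russell's analysis of 'the F is G': exactly one thing in the
    domain satisfies F, and that thing satisfies G."""
    fs = [x for x in domain if f(x)]
    return len(fs) == 1 and g(fs[0])

domain = ["eiffel_tower", "wall_a", "wall_b"]
is_king_of_france = lambda x: False            # no such object exists
is_bald = lambda x: True

# Both the affirmation and its apparent negation come out false:
assert the(is_king_of_france, is_bald, domain) is False
assert the(is_king_of_france, lambda x: not is_bald(x), domain) is False

# And "the wall is under 5ft or not under 5ft" fails when the
# description fits two objects, which is the drawback raised above:
is_wall = lambda x: x.startswith("wall")
under_5ft = lambda x: x == "wall_a"
assert the(is_wall, under_5ft, domain) is False
assert the(is_wall, lambda x: not under_5ft(x), domain) is False
```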

The other solution is to assume that the logical content of these sentences really is as it appears, but that the logical rules of deduction we use to generate the paradoxes are defective, and that the proper rules should more closely reflect how we actually arrive at conclusions, which apparently doesn’t give rise to contradictions. One promising way of doing that might be to admit partial truth functions. We can think of a partial truth function as an algorithm that operates on objects and, if it yields a value, yields either true or false. However there is also the possibility that the algorithm won’t halt and will fail to produce a truth-value. This does not effectively introduce a third truth value into the system, for reasons that are somewhat complex, but which I can describe quickly as stemming from the fact that it is in principle impossible to say whether a partial truth function that hasn’t yet produced a value will or won’t produce a value at some point in the future. And thus there is no way to talk generally about the function not producing an output. (Or, in other words, the halting problem has no solution.) We could apply this to solve the “paradoxes” mentioned by simply asserting that “is a true sentence” fails to yield a result in certain cases of self-reference, and that most predicates don’t yield a value when applied to non-existent objects. One way to interpret what that means, in ordinary terms, is as a failure to assert: saying that the self-referential liar sentence is true or false doesn’t really assert anything, nor does attributing properties to things that don’t exist. Of course that still means that when we reason in a normal fashion we are presupposing facts such as “it is possible to assert things of ‘the wall’”, but that seems like a kind of presupposition that we might actually have in mind, or might be committed to as a byproduct of actually asserting things about it.
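One way to picture a partial truth function is as an evaluator that may fail to halt. The sketch below approximates non-halting with a step budget (an imperfect stand-in, since genuine non-halting can never be detected); the encoding is my own:

```python
def evaluate(sentence, fuel=100):
    """Evaluate a sentence, where a sentence is a function that receives
    an evaluator for its sub-sentences. The fuel budget stands in for
    the possibility that evaluation never halts."""
    if fuel == 0:
        raise RecursionError("evaluation did not halt within the budget")
    return sentence(lambda s: evaluate(s, fuel - 1))

def snow_is_white(ev):
    return True

def liar(ev):
    return not ev(liar)      # "this sentence is false": asks for its own negation

def truth_value(sentence):
    try:
        return evaluate(sentence)
    except RecursionError:
        return None          # no value produced; not a third truth value

assert truth_value(snow_is_white) is True
assert truth_value(liar) is None
```

The liar simply never bottoms out: each evaluation of it demands another evaluation of itself, so no truth value is ever returned, and nothing has been asserted.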

Setting aside how we might fix the problems we encounter when we combine logic and language, let us take another look at what the fact that language, and our reasoning using it, actually works indicates. All we can say, without presupposing more than we should, is that evolution knows best. Evolution has produced in people a capability to reason effectively that allows them to correctly deduce facts about the things that they encounter. And, naturally, we can check this capacity of ours for errors by simply examining whether the deductions we make about those things are correct, since they are about things we interact with. But, obviously, evolution doesn’t care about our ability to reason about fictional entities or semantical facts, so long as thinking about them doesn’t interfere with our ability to reason practically. This is why I think it is a bad idea to generalize from logical laws that seem true enough, such as the law of the excluded middle. Sure, the law of the excluded middle seems as certain as anything can be, since it holds in every case that we have ever encountered, and thus evolution has led to our minds incorporating it at a very low level, as, given that evolution cares only about the cases we might encounter, there is no reason for its validity in other cases to matter in that respect. But why assume, given those facts, that it must hold for non-existent entities as well? Certainly we can’t imagine them in any way except as obeying the law of excluded middle, but that could be simply because of limits on our imagination when thinking about what is not the case. Indeed I can’t even think of a way to settle the matter, even in principle, because given that non-existent things don’t exist there is no way to check. And the same holds for semantic facts in the context of self-referential sentences, because there is no way to check the liar sentence to see whether it is really true or not. Indeed this might even be an argument that we should leave such cases completely aside; after all, the fact that we can’t confirm any possibilities regarding them strongly implies that what we say in those cases doesn’t really matter in any significant sense, and thus we can safely ignore them, so long as we firmly resolve to also ignore any similar cases in the future that threaten to give rise to logical paradoxes in language.

So what I am saying then is that, if we must pick some way to deal with such cases, the choice we make is, to a large extent, arbitrary, so long as it doesn’t interfere with how reason works in normal cases. Which is why I lean towards treating our predicates as partial truth functions, because our normal reasoning is recovered under the simple assumption that those predicates actually have values in the cases under consideration. But no matter how we decide to deal with these cases, or not deal with them, I think there is one position we are certainly justified in resisting, namely the idea that language is in need of repair because of these apparent paradoxes. To endorse this view is seemingly to prefer some particular logic over language, and thus to think that language is deficient so far as it does not conform to that logic. But I think it is easier to argue the opposite: that language, and our reasoning with it, is extremely productive when it comes to deducing conclusions about the real world (versus artificial mathematical situations), and thus that insofar as that logic isn’t a satisfactory model for language and linguistic reasoning, given the contradictions that taking it as such would give rise to, it is the logic that is deficient. Because, while simple, logic is also unnatural and lacks expressive power compared to natural language. Since the claimed inconsistency in natural language doesn’t actually trouble us, there is no need to give up its expressive power for a solution to what is essentially a non-problem.

December 5, 2007

What If Arithmetic Was Inconsistent?

Filed under: Logic — Peter @ 12:00 am

Considering the consequences of the inconsistency of arithmetic may seem to be an enterprise of questionable value. After all, don’t we know that arithmetic is consistent? Well it turns out that we don’t. Not only are we unable to demonstrate the consistency of arithmetic, it has been proven that such a demonstration is essentially impossible. Nor has anyone proven that arithmetic isn’t inconsistent, because obviously being able to do that would amount to a consistency proof. But the possibility of proving or demonstrating the inconsistency of arithmetic remains open. This asymmetry makes the consistency of arithmetic seem at least questionable when we think about it; if it had been proven in one breath that nothing could be conclusively said about either the consistency or the inconsistency of arithmetic, then perhaps our unease could be set to rest, but as things stand we are guaranteed either that the possibility that arithmetic is inconsistent will always remain open, or that arithmetic will be shown definitively to be inconsistent. In such a situation isn’t it at least reasonable to entertain that possibility?

Of course it might be objected that the consistency of arithmetic can be proven in other systems, and that such proofs are reason to believe in the consistency of arithmetic, even though a proof of that consistency is impossible within arithmetic itself. But to me this simply seems to beg the question, at best. It is equally well known that the problem with the consistency of arithmetic arises for any system of equal expressive power, which includes all those in which a consistency proof for arithmetic could be given. Suppose then that Z proves the consistency of arithmetic. It is then an open question whether Z is consistent. And if Y proves Z consistent then Y’s consistency is an open question, and so on. If it were the case that Z is in fact inconsistent then obviously Z could prove anything, including arithmetic’s inconsistency as well as its consistency. Additionally, if any of these systems are actually inconsistent then the inconsistencies will likely arise in certain applications of induction, of the kind that are needed to construct these very consistency proofs. And so the original proof of the consistency of arithmetic from Z might very well exhibit the “error” in Z that makes it inconsistent, meaning that, again, whether we take the proof from Z to mean anything depends to a large extent on our confidence in Z.

Another natural objection to make to this idea is that we can simply observe that arithmetic works, that we don’t get inconsistent results when we add or subtract, and that in the uses we put arithmetic to it doesn’t seem to have failed us yet. But this is an odd kind of argument to give for the idea that arithmetic is consistent. It is not necessarily the case that every inconsistent system must demonstrate its inconsistency immediately. Consider, for example, someone who held both that the arithmetical axioms were true and that their calculator accurately reflected arithmetical operations. Taken in conjunction these beliefs form a system which we know is inconsistent; calculators can only work with numbers up to a certain size and precision, and thus if we take the overflow values literally as the result of certain operations we can produce arithmetical identities that express contradictions. And yet this person may never come to realize such contradictions exist, assuming they use their calculator only on real problems and don’t immediately try to break it by giving it numbers that are too large, as I did when I was a child. Indeed all the practical applications of arithmetic that are supposed to justify its consistency could easily have been carried through by our fictitious person and their calculator (assuming it could handle certain large numbers), a system that we know to be inconsistent. The moral is that inconsistency does not necessarily cause a formal system to “explode”; it may lie hidden because, being finite beings, all of our investigations are confined to some finite area outside of which the inconsistency may lie. For similar reasons it can’t be argued that the existence of the universe somehow proves that arithmetic, or any other mathematical system, must be consistent. It is true that if we can exhibit a model of some system then it must be consistent. The universe, however, is finite, or at least we only know of a finite universe.
Thus the universe can, at best, model a finite set of mathematical truths. But such finite mathematical systems are not the problem; all the problematic cases, arithmetic included, are problematic specifically because an infinite number of unique formulas might be proven. And thus the existence of the universe demonstrates that, at best, only some finite fragment of those systems is consistent. At least that is what I would say were I to consider that possible defense seriously, but it seems to have some serious flaws all on its own regardless of the size of the universe. Mathematical truths are used to model the universe, but the universe is not compelled to model mathematical truths. The universe simply exists, as a collection of independent facts, and thus talking about its consistency is, to an extent, meaningless.
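The calculator analogy can be made concrete with floating point numbers standing in for a calculator’s limited precision. Read literally as arithmetic, their results yield identities that contradict the axioms, while every everyday-sized calculation agrees with true arithmetic:

```python
# IEEE doubles cannot distinguish 10^16 from 10^16 + 1, so taking the
# result literally yields "x + 1 = x", which contradicts the
# arithmetical theorem that x + 1 is never equal to x:
big = 10.0 ** 16
assert big + 1 == big

# Yet for small whole numbers, the only inputs most users ever try,
# the "calculator" agrees with arithmetic, so the contradiction
# never surfaces in practice:
assert all(float(a) + float(b) == float(a + b)
           for a in range(1000) for b in range(1000))
```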

Of course by suggesting that arithmetic might be inconsistent I don’t mean to suggest that somehow it might come apart with sufficiently large but still finite numbers, as my analogy with the calculator might have suggested. Certainly it may, since I haven’t personally witnessed all possible calculations. But the most suggestive possibility seems to be related to the Gödel sentence, a key part of the proof showing that consistency proofs for arithmetic are impossible, which states that there is no proof of that very sentence. Often we intuitively understand this sentence as true, and thus as expressing the consistency of arithmetic while at the same time being unprovable. But of course, since it is not provable, it is also consistent to take its negation, which says that there is a proof of the affirmative version, as true. I don’t know what such a system looks like, since the negation of the Gödel sentence seems to contradict itself, but there must be some such system in which there is no real contradiction (otherwise we would have a proof by contradiction of the inconsistency of normal arithmetic). However, in trying to devise a consistent system including its negation we might stumble upon the idea of a class of infinite numbers expressing infinite length proofs that behave in odd ways in order to avoid the apparent contradiction. But the simplest way is to take these infinite proofs as being actually producible, and thus as genuinely giving rise to contradictions. This gives us an interesting way for arithmetic to be inconsistent, because the inconsistencies can only be revealed by infinite length proofs. And obviously we will never actually come across an infinite length proof, which is perfectly consistent with our experience of arithmetic.

Suppose then that arithmetic, or mathematics in general, was inconsistent. What ought we to do in such a situation? One reaction might be to restrict ourselves to domains in which we know no contradictions will arise. Of course that might mean simply less powerful mathematical systems, but it could also mean restricting ourselves in the way we use these mathematical systems such that their inherent inconsistency won’t bubble to the surface. If contradictions only arise as a result of infinite length proofs, as has been suggested above, then obviously all we need to do is commit ourselves to never actually constructing one of those, which isn’t too hard of a task. But, more generally, we might find a way to establish for any given starting point that any contradictions that exist must lie a certain number of deductive steps away from it. And then we could simply restrict ourselves to staying within that bound when we construct our deductions, which should give us all the power we need, for all practical purposes. We also might realize that, again for all practical purposes, whether mathematics is inconsistent doesn’t really matter. All we need is mathematical models that we can put into correspondence with the world and use in certain limited ways to make calculations about it. An inconsistency that actually arises, from this point of view, is just one more way that the model can be defective, prompting a replacement. Although the model might give rise to inconsistent conclusions were we to deduce things from it, that is simply not how we generally use mathematical models; we calculate with them, we don’t deduce from them. Furthermore, there is a certain tradition in physics of gluing together mathematical models known to be inconsistent in ways where the inconsistencies can simply be swept under the rug in most situations, and physics doesn’t seem too much the worse for wear for it, although it isn’t a perfect solution.
Because of such considerations you might also think that if mathematics is inconsistent then mathematicians should stop being mathematicians. But I don’t think that they necessarily should take that step, if they can reconcile themselves to the inconsistency of mathematics. Some people just like doing mathematics, and if they like mathematics they shouldn’t let deficiencies in it stop them, just as I don’t let deficiencies within philosophy stop me from doing it.

So suppose someone were to assert that all sufficiently powerful mathematical systems (all of those which can’t be proved consistent) were inconsistent. Indeed I am tempted to make such an assertion given some of the thoughts I have put forward here. Could we refute it? Indeed we could, although obviously we couldn’t demonstrate the consistency of any particular system. What we could demonstrate is a certain kind of relationship between two or more systems, such that only a certain number of them, at most, can be inconsistent. In the case of two systems this would boil down to showing that one of them is consistent if and only if the other is inconsistent. This would demonstrate that not all sufficiently powerful mathematical systems are inconsistent. Indeed it would even validate both of them, since, being of sufficient power, each can model the other, and so the inconsistency in the inconsistent one of them cannot be demonstrated; for if it could be, then a proof could be formulated in the consistent system of its own consistency, using the demonstrated inconsistency in the other system and the particular relationship between the two systems mentioned earlier. And since we know that is impossible, it must be the case that the inconsistency in two such systems related in this way can never be demonstrated, making even the inconsistent one “safe” for all practical purposes. (This very fact may suggest that no two systems can be related in this way, although it isn’t conclusive evidence.)

November 23, 2007

The Myth Of The Perfect Language

Filed under: Logic — Peter @ 12:00 am

Language, especially written language, has always fascinated people. Its expressive power strikes some as magical, and the idea that with the right language the world could somehow be controlled recurs in many myths. Even in this modern age many people trust whatever is written without thinking, possibly because once the idea expressed in writing is entertained it seems real. But obviously no language can possibly convey only true ideas, and words certainly can’t exercise any form of control over reality. Still, the idea remains that many problems could be dissolved with the right language, that somehow if we could just express our ideas in the right way everything would become clear. And usually it is supposed that some kind of logical/mathematical language could fill this role if anything could. But is such a language possible, and would it really be useful?

Before we can turn to those questions we must first examine more closely what exactly a perfect language is supposed to do, which will place limitations on the form it can take. Three common requirements are that statements made using the language must be completely unambiguous, that the meaning of words must be completely captured or capturable, and that every claim that follows necessarily from a set of claims expressed in the language can be deduced in a completely mechanical way, without any need for special intuition. Let’s consider unambiguity first, since it is the simplest. Ambiguity is a problem in our normal languages because the things we say can be taken in more than one way, and it is quite possible that some of the things they can be taken to mean are true while the others are false. Not only does this raise problems for the communication of ideas, but there is the possibility that the ambiguity will infect our thinking about the matter as well, where we proceed first on the basis of one meaning and at some later point switch to a second, so that we might reach conclusions that would be impossible if the statement were taken to have a single fixed meaning. But it is not entirely clear how ambiguity can be avoided. It is true that we may attempt to provide precise definitions for all the terms we use, but those definitions must themselves involve words that are at some point undefined, because the chain of definitions must obviously come to an end. And while contextual or circular definitions are possible, these guarantee the existence of ambiguity rather than remove it. It is possible, however, to construct a language that is completely unambiguous so long as certain facts hold, although those facts themselves can never be conclusively verified. Suppose, for example, that we were considering a term defined in such a way that the definition directs us at some part of the world, and that it can direct us at only one particular thing, at most. If this language is to be unambiguous then there must be such a thing that we are actually directed at, and, if this language is to be more than private, everyone must be directed at the same thing by their understanding of the definition. And, although we can test these assumptions, we can never completely verify them, because it is always possible that other people are directed at something slightly different and that the difference has simply yet to be revealed. But if these assumptions do hold then those terms are completely unambiguous, and so will be any terms defined on the basis of them.

The next task for the perfect language is to capture completely the meaning of words. Now obviously there is a sense in which even our ordinary language captures the meaning of words completely. After all, if I write “justice” down, that word captures the meaning of justice completely, at least for me. So what is desired is something more than this, something connected to unambiguity: namely that by writing a word down we somehow fully expose its meaning, or that if we can’t do so then we can’t even write it down. Obviously for convenience any language will contain symbols that are taken to refer to things, because it would take too long to write down the complete meaning of every word every time we use it. Still, in the perfect language such symbols would be by themselves meaningless until a complete meaning is given that they can be said to stand for. As I mentioned previously, the idea that the complete meaning is to be expressed when we use words in the perfect language is connected to unambiguity, because to be unambiguous definitions must be provided that pin down the meaning of the word, and some expression of the word’s complete meaning would fill that role perfectly. However, this requirement asks something of the perfect language over and above unambiguity: the language must also be able to capture the meanings of the words we use ordinarily. And this is where the perfect language goes beyond the logical languages that already exist, because while we can simply stipulate that a particular letter stands for some property and another for some object, we cannot also express the meaning of the words we usually refer to those properties and objects with, or at least it’s not obvious how to do so in any remotely straightforward way.

And our third requirement is that the perfect language must come with rules for exploring the connections between claims, such that if one claim follows necessarily from another, or is incompatible with another, this can be determined in a purely mechanical way, with no need for intuition. The idea is, I suppose, that once the meanings are completely revealed there is no barrier to manipulating them to reach these conclusions. Again logical languages can be held up as an example of this, because given any collection of statements we can apply a number of rules to arrive at further statements that must hold given them. Of course such languages are limited in some ways, with the rules for deduction being completely separate from the statements themselves. That, I suspect, is a bit of an error in language design. How truth works with respect to a given domain is as much a part of the meaning of the terms as are other facts about them. And thus if a perfect language could be constructed I would suspect that for the most part the deduction rules would somehow be embedded in the complete meanings, which also allows for different deduction rules to apply in different domains.
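What “purely mechanical” deduction amounts to can be illustrated with a toy example (a sketch only, not a proposal for the perfect language itself): a forward-chaining procedure that applies modus ponens as its single rule until nothing new follows.

```python
# Toy forward-chaining deduction: given a set of known facts and a list of
# implications (premise -> conclusion), repeatedly apply modus ponens until
# a fixed point is reached. No intuition required -- only rule application.

def derive(facts, implications):
    """facts: set of atomic claims; implications: list of (premise, conclusion)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in known and conclusion not in known:
                known.add(conclusion)  # one application of modus ponens
                changed = True
    return known

# Example: from "man" plus the rules man -> mortal and mortal -> buried,
# the procedure mechanically derives {"man", "mortal", "buried"}.
print(derive({"man"}, [("man", "mortal"), ("mortal", "buried")]))
```

The point of the example is only that each step is a blind pattern match; whether such rules should live alongside the statements, or be embedded in their meanings as suggested above, is the design question at issue.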

So, can we possibly construct such a language, and if we could, would we want to? With respect to the first part of the question there are a number of people who believe that the kind of language I have described is an impossible one, in part because of certain limitations concerning logical languages. For example, it is well known that logical languages, which seem simple enough, are incomplete: their rules for deduction will necessarily fail to arrive at every possible statement or its negation, no matter how many statements we begin with, even if it would appear that there must be some fact of the matter about them. And it is known that more expressive logical systems must either be inconsistent or lack a complete set of rules for all the valid patterns of deduction. But I don’t take these results as necessarily defeating the possibility of a perfect language. Certainly they express limitations on what formal languages can do, but who is to say that those limitations are in some way detrimental to the project? Incompleteness need not bother us if we simply accept that there is not a determinate fact of the matter about every possible statement, despite initial appearances to the contrary, and I think there are good reasons to think exactly that, which I won’t go into here. Similarly, there is no need to be bothered by the limitations on more powerful deduction systems. We might suppose that only a limited number of deductive rules are really needed, and that the statements that can’t be reached by them aren’t really true at all. Obviously this reduces the power of the system, but, again, who is to say that the more powerful system really reflects the kind of truth and necessity we are interested in? Or, we might accept inconsistency, so long as we are working with deductive rules that are able to contain it. Again, going into all the details is somewhat tangential to the topic at hand, so I will omit them. It suffices to point out that the limitations on formal languages may be taken not as failings, but as real limitations on truth and necessity, ones that overthrow certain faulty intuitions we previously had about them. With these considerations aside, the perfect language seems to be a live possibility. Although it might be quite difficult to construct, there don’t seem to be any necessary barriers to doing so: every meaning seems expressible in ordinary language, and we might very well simply go through the dictionary replacing terms with their perfect-language equivalents, so it appears that with enough dedication we could bring the language into existence.
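For reference, the limitative results alluded to above are Gödel’s incompleteness theorems, which can be stated for any consistent, recursively axiomatized theory T that interprets enough arithmetic:

```latex
% First incompleteness theorem: some sentence is neither provable
% nor refutable in T.
\text{(I)}\quad \exists\varphi\;\big(T \nvdash \varphi \;\wedge\; T \nvdash \neg\varphi\big)
% Second incompleteness theorem: T cannot prove its own consistency.
\text{(II)}\quad T \nvdash \mathrm{Con}(T)
```

Whether these count as failings of formal languages or, as suggested above, as genuine limitations on truth and necessity is exactly the interpretive question at stake.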

But would the perfect language really be useful to us, or would it simply be a curiosity? It seems obvious from a certain perspective that the perfect language is desirable; we want to eliminate any ambiguity in order to better evaluate our claims. And it would certainly be nice to have all the valid rules for deduction laid out in front of us for complex topics, since they can often be a matter of dispute. But, on the other hand, while the perfect language might be helpful, it doesn’t seem to solve our real problems. The real problems are not in determining which claims follow from each other or what exactly the meanings of words are; the real problems are in determining the correctness of claims and how well our terms actually track what we suppose them to refer to. And these are problems that the perfect language simply can’t help us with; it can only proceed from the claims and definitions we provide, not validate them. And so, while the perfect language probably wouldn’t hurt us, there is an argument to be made that focusing on it directs our attention at the wrong problems, and that actually trying to bring it into existence would be a waste of our energy given the other more pressing issues that demand our attention.

Blog at WordPress.com.