To me it seems obvious that philosophy would benefit from a more formal method. But philosophical arguments are not easy to capture with formal logic, or at least the arguments of modern philosophers aren’t. It is true that the Greek and Medieval philosophers incorporated the occasional argument that was easy to formalize, a task often given to students, but modern philosophy has left such straightforward reasoning behind. (Not because philosophers are trying to be obtuse, but because the arguments contained in current philosophical literature are hard to capture in a syllogism without making the premises too loaded.)
And a suitable formal method for doing philosophy is unlikely to fall into our laps. Part of the process of developing such a formal method is determining where existing formal systems have difficulty in capturing philosophical arguments. Thus I will here attempt to loosely formalize Rawls’s veil of ignorance argument (which I have already discussed here and here) using set theory and first order logic. I picked the veil of ignorance because it is relatively straightforward, but at the same time doesn’t seem to have an obvious way of being formalized. Of course I don’t expect to be fully successful in formalizing the argument, but ideally the places where I have difficulty will reveal what extra features a formal method for philosophy must have over and above those already available to us.
So then let us start with some definitions.
In our formalization we will have only two kinds of things, rules and objects.
Let R be the set of all rules, and P be the set of all objects.
Let A be a set that contains all members of society. … A ⊆ P
Let Wb(a, r) be a function that maps pairs of objects and rules to natural numbers representing the wellbeing of that object under that rule … Wb: P x R -> N
The veil of ignorance proposal is that rules that are just are ones that would be chosen by a self interested agent given that it was equally likely that the agent was any member of the set A.
If the agent is maximizing its expected value then, since each a ∈ A is equally likely, it chooses the r ∈ R such that ∑ Wb(a, r), for all a ∈ A, is maximized. (The expected value is this sum divided by the constant |A|, so maximizing one is equivalent to maximizing the other.) (principle 1)
On the other hand, if the agent is minimizing its possible losses (a maximin strategy), it chooses the r ∈ R such that the minimum Wb(a, r), over all a ∈ A, is maximized. (principle 2)
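The two principles can be made concrete in a small sketch (my own illustration, not part of the original argument), assuming finite sets of rules and members and a lookup table standing in for Wb:

```python
# Toy model of the veil-of-ignorance choice. The rule names, member
# names, and wellbeing numbers below are all hypothetical.

def choose_rule_expected(rules, members, wb):
    """Principle 1: pick the rule maximizing total (hence expected) wellbeing."""
    return max(rules, key=lambda r: sum(wb[(a, r)] for a in members))

def choose_rule_maximin(rules, members, wb):
    """Principle 2: pick the rule maximizing the worst-off member's wellbeing."""
    return max(rules, key=lambda r: min(wb[(a, r)] for a in members))

members = ["a1", "a2"]
rules = ["r1", "r2"]
# Under r1 wellbeing is split (10, 0); under r2 it is split (4, 4).
wb = {("a1", "r1"): 10, ("a2", "r1"): 0,
      ("a1", "r2"): 4,  ("a2", "r2"): 4}
```

On these made-up numbers the principles disagree: the expected-value chooser picks r1 (total 10 beats total 8) even though one member is left with nothing, while the maximin chooser picks r2, sacrificing total wellbeing to protect the worst off.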
And this rule is fair, according to Rawls, because the method by which it is chosen doesn’t favor any one a ∈ A over another.
And the objections that were raised to this proposal were that:
First, that there was no definition of how to determine the contents of A, which affects which r ∈ R would be chosen.
And secondly, that agents don’t necessarily want to maximize their own wellbeing alone. Various agents put more or less weight on their own wellbeing, and more or less weight on the wellbeing of other objects (which I take here can be things like “art” and “the environment” as well as other people). And one proposed fix to this problem was to take the agent’s own interests out of the deciding process.
Let the interests of each a ∈ A be represented by the sum: I(a, r) = k1*Wb(a, r) + k2*Wb(o1, r) + k3*Wb(o2, r) + …, where the k values are constants that weight the various terms of the sum.
Then the procedure would be to find the r ∈ R that maximizes ∑ I(a, r), for all a ∈ A (assuming we are working with principle 1; principle 2 suffers from the problem pointed out next as well).
But this is no longer a fair principle. Consider b, c ∈ A such that I(b, r) = 2*Wb(b, r) and I(c, r) = 1*Wb(c, r); that is, b values his or her own wellbeing twice as much as c does. But now b’s wellbeing counts twice in the sum we maximize, so the rule we choose will favor the wellbeing of b over c, which is unfair.
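The unfairness can be exhibited with a toy computation (hypothetical numbers and names, mine rather than the original argument’s), reducing each agent’s interests to a weight on their own wellbeing: the agent with the larger self-weight pulls the choice toward rules that favor them.

```python
# Sketch of the weighted-interest objection. Weights and wellbeing
# numbers are made up for illustration.

def choose_rule_interests(rules, members, wb, weight):
    """Maximize the sum of I(a, r) = weight[a] * Wb(a, r) over members."""
    return max(rules, key=lambda r: sum(weight[a] * wb[(a, r)]
                                        for a in members))

members = ["b", "c"]
rules = ["r1", "r2"]
weight = {"b": 2, "c": 1}              # b self-weights twice as much as c
wb = {("b", "r1"): 5, ("c", "r1"): 0,  # r1 benefits only b
      ("b", "r2"): 0, ("c", "r2"): 8}  # r2 benefits only c, by more

# An unweighted sum would pick r2 (8 > 5), but b's double weight flips
# the choice to r1 (2*5 = 10 > 8), so the procedure favors b.
```

Running the chooser with equal weights gives r2; with b’s inflated self-weight it gives r1, even though r2 produces more total wellbeing.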
In any case Rawls lays out two subsequent claims about the nature of rules that would be selected as just by this method. However, I shall only tackle one of them here. Rawls says that the considerations described above require that any just rule will give people the most extensive basic liberty possible, compatible with a similar liberty for others. This seems to view liberty as some good of which there is a fixed quantity (not necessarily a bad way of thinking about it, since giving me the liberty to punch you would take away your liberty not to be punched), and furthermore that having liberty increases the wellbeing of the individual that has it.
So let us say that our rules can distribute a varying amount of L, liberty, to each a ∈ A. Under principle 1 it wouldn’t matter how we distributed L, because (assuming each unit of L adds the same amount of wellbeing) the total would be the same however it was divided. But if we were working under principle 2 then it would be as Rawls says: we would want to give everyone an equal amount of L.
But this L is not how real liberty works. When real people are given liberty some of them use it to harm other people, so in distributing L we not only increase that person’s wellbeing but may also decrease the wellbeing of others, depending on that person’s interests. Thus it wouldn’t be fair to give everyone equal amounts of liberty (this is why we throw people in jail). So Rawls must be wrong in saying that equal liberty should be given to all, or at the very least that claim isn’t justified by his own criteria for picking rules.
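This point can be sketched numerically (my own toy assumption, not anything in Rawls): suppose an agent’s wellbeing is their own liberty minus the harm inflicted on them by others, and a “harmful” agent converts every unit of liberty received into a unit of harm. Then an equal split of L no longer maximizes the minimum wellbeing.

```python
# Toy model: wellbeing = own liberty - harm done to you by others,
# where members of `harmful` turn all their liberty into harm.

def maximin_wellbeing(allocation, harmful):
    """allocation: dict of agent -> units of liberty.
    harmful: set of agents who use their liberty to harm others.
    Returns the minimum wellbeing under this allocation."""
    total_harm = sum(l for a, l in allocation.items() if a in harmful)
    wellbeings = []
    for a, l in allocation.items():
        # An agent is not harmed by their own misuse of liberty.
        harm_by_others = total_harm - (l if a in harmful else 0)
        wellbeings.append(l - harm_by_others)
    return min(wellbeings)

harmful = {"y"}                 # y uses liberty to harm
equal = {"x": 2, "y": 2}        # equal split: x ends up at 2 - 2 = 0
unequal = {"x": 3, "y": 1}      # restricting y raises the minimum
```

On these numbers the equal allocation yields a minimum wellbeing of 0 while the unequal one yields 1, so under principle 2 the fair-seeming equal split loses once liberty can be used to harm.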
So, what can we learn from this exercise? Well, most strikingly, the principle for telling when a rule is fair (when it is chosen by a method that doesn’t favor one person over another) is hard to formalize, because it requires the ability to examine the structure of a procedure or method. While we can write down the procedure itself formally, that procedure is not really a mathematical object, properly speaking, that we can operate on. But perhaps it should be. Another problem came when dealing with the nature of the rules and agents themselves. Both rules and agents must be fairly complicated objects: agents have different preferences and interact in different ways, and rules govern those interactions, as well as the distribution of resources. To discuss which rules are better than others we needed to appeal to those facts, and thus how agents interact, and how the rules affect those interactions, would have to be formally specified. Both problems seem to be rooted in an inability to properly formalize objects with a complicated structure that can interact with each other.