Yesterday I discussed how probability seems to cause problems for the way we conventionally reason using logic. Today I will introduce a logical notation that allows us to reason on the basis of probability without running into those problems (or at least, the problems I mentioned; it isn’t possible to rule out all possible problems). The difference is not one of substance; essentially the same rules of deduction hold. The difference is that the notation makes certain mistakes impossible, and thus may save us from certain paradoxes which arise essentially out of our previous notational vagueness.

Normally in logic we work with a sequence of propositions and from them deduce new propositions, producing a proof. In this variant we replace that way of doing things with one in which we deal with sets of outcomes (each of which may correspond to a proposition) and reason on the basis of those. The basic notation then, which serves basically the same role in the proof as a proposition did previously, is as follows:

A // B

Where B is a set of mutually exclusive and exhaustive possibilities (at most one of them can be the case, and at least one must be), and A is a non-empty subset of B (A⊆B).

These outcomes may very well be propositions from classical logic. For example, {p} // {p, ~p} is a possible line in a proof using this notation. So is {p∧q, p∧~q} // {p∧q, p∧~q, ~p∧q, ~p∧~q}. But {p∧q} // {p, q, p∧q} is not, because p, q, and p∧q are not mutually exclusive and exhaustive. On these sets we need to define three operations: ∩ and -, which work as usual for sets, and x. AxB designates the set containing every unique possible outcome that results from conjoining an outcome in A with an outcome in B (combinations that are contradictory, such as q∧~q, are not possible outcomes and so are discarded). For example, if A is {p, ~p} and B is {q, ~q} then AxB is {p∧q, p∧~q, ~p∧q, ~p∧~q}. But note that AxBxB is equal to AxB, because applying xB to AxB doesn’t add any new possible outcomes.
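The x operation can be sketched concretely. Here is a minimal Python rendering, assuming outcomes are represented as frozensets of (variable, truth-value) pairs; the names `Outcome`, `consistent`, and `cross` are illustrative, not part of the notation itself:

```python
# A sketch of the x (cross) operation on sets of outcomes.
# An outcome is a frozenset of literals; a literal is a (variable, truth) pair.

from itertools import product

Outcome = frozenset  # e.g. frozenset({("p", True), ("q", False)}) is p∧~q

def consistent(outcome):
    """An outcome is impossible if it asserts both x and ~x."""
    return not any((var, not val) in outcome for (var, val) in outcome)

def cross(A, B):
    """AxB: every unique possible outcome obtained by conjoining
    an outcome from A with an outcome from B."""
    return {a | b for a, b in product(A, B) if consistent(a | b)}

p, np = Outcome({("p", True)}), Outcome({("p", False)})
q, nq = Outcome({("q", True)}), Outcome({("q", False)})

A = {p, np}        # {p, ~p}
B = {q, nq}        # {q, ~q}
AxB = cross(A, B)  # the four conjunctions p∧q, p∧~q, ~p∧q, ~p∧~q

assert len(AxB) == 4
assert cross(AxB, B) == AxB  # AxBxB equals AxB, as claimed above
```

The final assertion shows why applying xB a second time changes nothing: conjoining p∧q with q reproduces p∧q, and conjoining it with ~q yields a contradiction, which is discarded.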

As with classical logic, a proof using this notation consists of a number of lines, and from previous lines we may deduce new ones. There are five simple rules of deduction (which contain some redundancy), and one complicated one. The simple rules are as follows:

1)

A // Q

B // Q

—————

A∩B // Q

2)

A // Q

—————

AxR // QxR, where R is any set of mutually exclusive and exhaustive possibilities

3)

———

Q // Q (this may be deduced at any time, for any Q)

4)

A // B

C // D

—————

AxC // BxD

5)

AxC // BxD

—————

A // B, where A⊆B
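To make the simple rules concrete, here is a sketch of rules 1, 2 and 4 in Python, assuming outcomes are frozensets of literal strings (so frozenset({"p", "~q"}) stands for p∧~q); all function names are illustrative:

```python
# A minimal sketch of deduction rules 1, 2 and 4 on lines of the form (A, B).

from itertools import product

def conjoin(a, b):
    """Conjoin two outcomes; return None if the result is contradictory."""
    c = a | b
    return None if any(("~" + v) in c for v in c) else c

def cross(A, B):
    """The x operation: every unique possible combined outcome."""
    return {c for a, b in product(A, B) if (c := conjoin(a, b)) is not None}

def rule1(line1, line2):
    """From A // Q and B // Q deduce A∩B // Q."""
    (A, Q), (B, Q2) = line1, line2
    assert Q == Q2, "both lines must share the same outcome set Q"
    return (A & B, Q)

def rule2(line, R):
    """From A // Q deduce AxR // QxR, for exclusive and exhaustive R."""
    A, Q = line
    return (cross(A, R), cross(Q, R))

def rule4(line1, line2):
    """From A // B and C // D deduce AxC // BxD."""
    (A, B), (C, D) = line1, line2
    return (cross(A, C), cross(B, D))

p, np = frozenset({"p"}), frozenset({"~p"})
q, nq = frozenset({"q"}), frozenset({"~q"})

# Rule 4 applied to {p} // {p, ~p} and {q} // {q, ~q}:
left, right = rule4(({p}, {p, np}), ({q}, {q, nq}))
assert left == {frozenset({"p", "q"})}  # {p∧q}
assert len(right) == 4                  # all four p/q combinations
```

Rule 3 is trivially `(Q, Q)` for any Q, and rule 5 simply strips a common factor back off, so they are omitted from the sketch.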

We also have what may be called hypothetical introduction:

| A // B (for any A and B)

| …

| ∅ // Q (for any Q)

B-A // B

The lines elided by … are simply more lines in the proof using any of the rules of deduction. However, once past the area designated by |s, those lines may no longer be used to deduce new lines in the proof.
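The discharge step of hypothetical introduction can be sketched as a single function, assuming outcomes are plain strings for brevity (the name `discharge` is illustrative):

```python
# A sketch of hypothetical introduction: if assuming A // B lets us
# derive ∅ // Q, we may conclude B-A // B outside the hypothesis.

def discharge(hypothesis, derived):
    """Given hypothesis A // B and derived line ∅ // Q, return B-A // B."""
    A, B = hypothesis
    empty, Q = derived
    assert empty == set(), "the hypothesis must lead to an empty set"
    return (B - A, B)

# Suppose assuming {q} // {q, ~q} led to ∅ // Q for some Q:
Q = {"p∧q", "p∧~q", "~p∧q", "~p∧~q"}
line = discharge(({"q"}, {"q", "~q"}), (set(), Q))
assert line == ({"~q"}, {"q", "~q"})
```

This mirrors the example proof below: the hypothesis {q} leads to the empty set, so its complement {~q} within {q, ~q} is concluded.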

Allow me to illustrate this process by translating a simple proof in classical logic into this notation:

{p∧r} // {p∧r, p∧~r, ~p∧r, ~p∧~r} – premise

{p∧~q, ~p∧q, ~p∧~q} // {p∧q, p∧~q, ~p∧q, ~p∧~q} – premise

| {q} // {q, ~q} – hypothetical introduction

| {p} // {p, ~p} – rule 5

| {p∧q} // {p∧q, p∧~q, ~p∧q, ~p∧~q} – rule 4

| ∅ // {p∧q, p∧~q, ~p∧q, ~p∧~q} – rule 1

{~q} // {q, ~q} – conclusion

This roughly parallels the proof of ~q from p∧r and ~p∨~q.

And given this notation it is easy to connect justification and probability. First we define A // B as meaning that A is a set containing the most probable outcome in B (although it may contain other outcomes as well). Thus from A // B we can conclude that we are justified in believing a, where a∈A, if and only if a is the only member of A; which is to say, we know with certainty that a is the most likely of the possibilities we are considering. Similarly, we can conclude that we are justified in not believing a particular outcome if it isn’t in A. And together these definitions imply that B // B represents a state of complete ignorance.

And, as a footnote, this blocks the “paradox” derived yesterday for obvious reasons (specifically that it isn’t the same set of possible outcomes that is being considered).