It would seem that part of the justification for any claim contains, in some way, a theory about how knowledge and justification work. This might not seem completely obvious, but you can always uncover such claims simply by asking “why?” enough times concerning the justification for the claims you encounter. For example, suppose that we know it is snowing outside (or at least think we know that). But how do we know it? Well, we have the visual experience of snow falling, and we know that when we see snow falling it is usually the case that snow is actually falling. But how do we know that this correlation is itself a reliable one? We might say that we have observed the correlation to hold in a large number of cases. But why does that entail that the correlation probably holds now? At a certain point we are simply going to have to assert that this is the way knowledge works, that some principle of generalization (and possibly other principles as well) justifies claims in a kind of primitive way, such that no further justification can be given for those principles themselves.
This might seem like a problem, and it would be a problem if we just let the matter rest there. Who in their right mind would let all our intellectual accomplishments rest on a foundation that was essentially a matter of fiat? Obviously we can’t simply find some other principles from which to justify them, because then the same questions will arise regarding those principles, and so on. This vicious regress is reminiscent of the problem of induction, which presents us with a similar quandary: to justify induction we seem to require induction, and here it would appear that to justify a theory of knowledge we require a theory about knowledge. Unfortunately this problem isn’t equivalent to the problem of induction, and the solutions to that problem, which usually involve simply relaxing our standards regarding how induction is to be justified, won’t work here.
The solution, if there is one, must involve bootstrapping, a process by which we move from knowing nothing about knowledge (what counts as justification for a claim, and so on) to knowing some things about knowledge. And obviously that is only possible if we have somewhat relaxed standards regarding knowledge, meaning that we aren’t required to deduce facts from some absolutely certain foundation in order for them to count as knowledge, because there is no absolute foundation to proceed from in this situation. (Of course if you hold knowledge to such strict standards you probably haven’t been able to resolve the problem of induction, in which case that would be a much more pressing problem than this more fundamental question, and so I assume that any reader of this piece is willing to grant me this if they are with me so far.) I propose that, instead of trying to devise a way to conduct such bootstrapping, we simply look at how people have actually come to have some idea, in a very general way, about what knowledge is. The thought is that if we haven’t successfully bootstrapped ourselves into a theory about knowledge that is correct in at least some significant ways, then this entire enterprise is doomed, because to come to a conclusion about how such bootstrapping works we must lean on certain pieces of knowledge. This is not to say that those pieces of knowledge are part of the actual bootstrapping process, but they are part of how we reach conclusions about it. If we really didn’t have any knowledge we would have to first bootstrap ourselves into some knowledge about how knowledge works before we could come to those conclusions (thus the bootstrapping necessarily precedes any knowledge about it).
One thing people generally rely on in a state of complete ignorance about knowledge is instinct, as we are hard-wired to draw certain conclusions from evidence. And indeed, before any theory about knowledge was developed, people were probably proceeding on these instincts to distinguish between trustworthy claims and those that could be discarded. The problem with instinct, though, is that it is essentially a black box as far as we are concerned (it is only much later in our intellectual development, with a number of complicated theories under our belt, that we can return to instinct and explain and justify it by arguing that it has survival value). It can’t be the foundation for a theory about knowledge because there is no justification for the instincts themselves. The only option left, then, for people in this situation is trial and error. Suppose that we begin with the hypothesis that some beliefs are better to have than others, and that we will call the better beliefs knowledge. Obviously this is easily testable by trial and error, because if you actually try to treat all beliefs as of equal value then you will find yourself running into a lot of doors, which clearly shows that treating all beliefs as equal is a bad idea. And naturally a number of specific beliefs could also be tested in this way, but this is not really what we are after; we are after a theory about what differentiates these good and bad beliefs, along with ways to arrive at the good beliefs and to avoid the bad ones. Again, people in this situation might proceed by trial and error, considering a number of different strategies for belief formation and seeing which ones form more good beliefs than bad beliefs (over a number of generations, probably, passing down what they have learned from one to the next).
Obviously for this to work the beliefs produced must themselves be testable, or, for obvious reasons, it becomes impossible to make judgments about which ways of forming beliefs are superior. And this might appear to be a problematic move in its own right, because doesn’t it presume that we can know whether the specific beliefs produced are good or bad? Isn’t it possible, in our position of complete ignorance, that our perceptions are so wildly mistaken that we can’t make accurate judgments about which are good and which are bad? Indeed that is a live possibility if we take these claims as really being about the external world in some way. But there is no need to do this; instead we can take them as elliptical statements that are really just about which perceptions we will have. This eliminates any possibility of error at this stage in the game when it comes to judging which beliefs are good and which are bad. Of course, later in the process of bootstrapping, the claim that an external world exists and that the beliefs under consideration are really about it will emerge, either as a claim on its own that is considered knowledge or as part of our theory about knowledge, justified simply because it is a very successful way of arriving at good beliefs (the latter more accurately reflects how we actually seem to treat the claim).
Obviously, as with solutions to the problem of induction, theories about knowledge itself arrived at by this bootstrapping process can never be completely certain. But that seems to be the fate of knowledge in general: while we can justify claims, we are forever unable to achieve perfect certainty. On the other hand, the only way that theories about knowledge developed in this way could turn out to be substantially faulty, at least in the domain of these testable beliefs, would be if certain kinds of generalities didn’t exist, if there were no patterns concerning which processes lead to more good beliefs than bad beliefs, or if which process will be successful were in constant flux. But if such possibilities were indeed the case, not only would bootstrapping fail to get anywhere (and thus the fact that such bootstrapping has already occurred is evidence that they aren’t the case), but it would be impossible to have any theory about knowledge by any means. And so bootstrapping, while not perfect, is as good as it gets.
The question then arises: given that this is the only way to develop a theory of knowledge from scratch, how can we extend a theory about knowledge developed in this way to non-testable domains? And there we have a serious problem, because if we are dealing with a non-testable domain it is hard to even say what knowledge consists in. To even get bootstrapping started we had to begin with the idea of a good belief in order to draw a distinction between two classes, even though that way of drawing the distinction is itself later left behind (replaced with the idea of an accurate belief, at about the same time that the idea of an independently existing external world appears). But if a domain is really non-testable, how can we divide it in this way? Perhaps some definition of accuracy could immediately play the required role, but that would seem to require the ability to establish correspondences between claims and features of that domain, requiring us to have access to it, which in turn would imply that claims about it are indeed testable. But, assuming these problems can be solved, it remains a mystery why we should even care about these domains, because clearly if we can’t make testable claims about them then they can’t affect us in any way (otherwise we would be able to make claims about their effects on us, test them, and go from there, which is essentially how we proceed with the ordinary world, starting with perception).
Even if we were to simply ignore those problems, it would seem that the best we can do is to extend the theories about knowledge developed by bootstrapping to cover that domain. This raises yet another difficulty, since such theories are going to endorse some kind of generalization from particulars, which requires some kind of access to individual facts to proceed, and which we clearly don’t have when it comes to these domains. All of these problems, taken together, strongly imply that when it comes to things we can’t perceive in some way, we can’t have knowledge about them. This may seem to contradict the obvious, since many will claim that we have mathematical knowledge, and it seems quite clear that, if some mathematical domain exists, we don’t have the kind of direct access to it that would let us test mathematical claims apart from their proofs and axioms. But I must question the claim that mathematical theorems are knowledge. What justifies the assertion that a theorem produced by deduction from axioms is correct? Certainly we may endorse the process of deduction, but the axioms themselves are without justification. A better defense of mathematics is to point out that mathematical theorems are often adopted by science for its uses, producing claims that we do endorse as knowledge, and thus that in some way the theorems so adopted must be “true”. The problem here is that for every theorem so adopted there are a number of others, proved from slightly different axioms, that are not used and must therefore be “false”. Consider non-Euclidean geometry, for example. Since a certain non-Euclidean geometry is adopted by physics, we might say that it was right. But then all the other variant non-Euclidean geometries were wrong, because they said false things about points and lines. Taken at face value this would mean that mathematics is largely a failure at producing knowledge, because it produces many more false theories than true ones.
Of course that conclusion could be avoided by saying that what lines and points are is simply defined by the axioms of each geometry (a reasonable claim), but then the theorems of mathematics are no longer knowledge about some domain. Rather they are simply a kind of game played with symbols, or in complete abstraction from any content, which happens to be useful on occasion; but, by itself, the game is simply a game. (This is not to put down mathematics; the occasional times when it is useful make it a worthwhile game to play. But it does mean that the game by itself isn’t telling us about anything except the game itself.)
My real worry is, as always, with philosophical knowledge (especially metaphysics). Is it really all testable in some extremely oblique way? Or is philosophy, or some subset of it, simply a game devoid of content, like mathematics, but one which, unlike mathematics, never proves useful to anyone but philosophers?