Smith discusses the crucial difference between science and philosophy, and how human fallibility has been used to defend skepticism.
In my last article I claimed that all of us probably have false beliefs that we will never be able to identify. I also suggested that the best way to keep these false beliefs to a minimum is to focus on our fundamental philosophical views and to integrate those views into our everyday lives until we can apply them habitually to every judgment we form.
Appeals to Authority
Of course, this method is no panacea against false beliefs; even the most reasonable people will invariably fall prey to the pitfalls of human fallibility from time to time. The trick is not only to minimize the number of our false beliefs but also to quarantine our errors so that they do not infect our other beliefs any more than is absolutely necessary. This is where some fundamental distinctions, such as that between the specialized sciences and philosophy, may play a useful role.
During my high school years one of my best friends was a genius in physics, who went on to earn two PhDs—one in physics, the other in applied mathematics. We argued incessantly—or so it seemed at the time—about the relationship between physics and philosophy, especially in regard to the Heisenberg Uncertainty Principle. My friend defended the “Copenhagen” interpretation, according to which our inability to simultaneously measure the momentum and position of subatomic particles (owing to the disturbing influence of the beam of light used in measuring these things) definitively proves “indeterminism” in the subatomic realm. I argued, in opposition, that this conclusion is unjustified, that it is an illegitimate leap from an experimental problem of measurement to a metaphysical conclusion about causation.
Now, I don’t wish to jump once again into the brambles of this controversy, which I discussed many times in subsequent years, by reviewing the arguments for and against indeterminism. I raise the issue here to highlight an important point, namely, the role of expertise in the physical sciences versus philosophy. When a scientist (in whatever field) describes the results of an experiment, her testimony may safely be accepted by nonscientists (provided her findings have been corroborated by other scientists). But when a scientist, however brilliant, draws philosophical conclusions from her experiments, her expertise ends there. There are no “experts” or “authorities” in philosophy, which is essentially refined common sense based on observations and experiences available to every person. The scientist, of course, may also be a competent philosopher, in which case her metaphysical conclusion may be correct. But she must argue for her philosophical position as any philosopher would. She cannot legitimately pull rank by invoking her expertise in a particular science. If a philosophical doctrine makes no sense, it does not become more sensible by invoking science.
This problem is not confined to the physical sciences. When some early anthropologists cited the substantial differences in moral beliefs and practices among different cultures to “prove” the philosophical doctrine known as moral relativism, they made the same kind of illegitimate leap. This is an important point to keep in mind. We must be clear about the limits of expertise, especially given how many of our beliefs about the world are based on the testimony of scientists, historians, and other authorities. Empirical facts are one thing, but the philosophical conclusions we may logically draw from those facts are another thing entirely. To understand the line of demarcation between specialized disciplines and philosophy may significantly reduce the number of one’s false or unjustified beliefs.
Some Implications of Fallibility
Ayn Rand astutely pointed out that epistemology—the science of knowledge—is needed only because of human fallibility. An infallible being, a being for whom error is impossible, would have no need to verify or rank his beliefs, so he would have no need for standards of justification or for concepts like truth, falsehood, probability, and certainty.
Human fallibility has sometimes been used to defend various types of epistemological skepticism, most notably in the writings of the eighteenth‐century Scottish philosopher David Hume. In A Treatise of Human Nature (1739), Hume appealed to fallibility as the basis for his argument that our claims to hold some beliefs with certainty cannot be justified by reason. Total skepticism is absurd, according to Hume: “neither I, nor any other person was ever sincerely and constantly of that opinion.” Hume continued:
Nature, by an absolute and uncontroulable necessity has determin’d us to judge as well as to breathe and feel; nor can we any more forbear viewing certain objects in a stronger and fuller light, upon account of their customary connexion with a present impression, than we can hinder ourselves from thinking as long as we are awake, or seeing the surrounding bodies, when we turn our eyes towards them in broad sunshine. Whoever has taken the pains to refute the cavils of this total skepticism, has really disputed without an antagonist….
Hume’s purpose was to show that our convictions of certainty, especially in our beliefs about cause and effect, “are derived from nothing but custom; and that belief is more properly an act of the sensitive, than of the cogitative part of our nature.” In short, our certain beliefs are a feeling based on custom and habit; they cannot be justified by reason.
Hume employed an ingenious, if somewhat sophistical, argument to support this conclusion. He began his discussion, “Of scepticism with regard to reason,” as follows:
In all demonstrative sciences the rules are certain and infallible; but when we apply them, our fallible and uncertain faculties are very apt to depart from them, and fall into error. We must, therefore, in every reasoning form a new judgment, as a check or control on our first judgment or belief.
Fallibility often demands that we review a primary judgment of truth with a secondary judgment that assesses the reliability of our initial judgment. But this secondary judgment (sometimes called a reflexive judgment) is also fallible and uncertain so, according to Hume, it merely adds more uncertainty to our primary judgment. And so on down the line, with each new judgment about previous judgments adding even more uncertainty to the overall situation. If we imagine an infinite chain of such judgments, the uncertainty of our belief will become increasingly severe and eventually reduce the probability of our primary judgment to a very low level.
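Hume’s regress can be given a rough numerical gloss. This is my illustration, not Hume’s own arithmetic: suppose a primary judgment is reliable with probability `p`, each reflexive judgment is itself reliable only with probability `r`, and (as Hume’s argument seems to assume) these uncertainties simply compound by multiplication.

```python
# A toy sketch of Hume's regress argument (my illustration, not Hume's
# own arithmetic). On the contested assumption that the uncertainty of
# each fallible reflexive judgment multiplies into the total, the
# probability of the primary judgment dwindles toward zero as reviews
# accumulate.

def compounded_probability(p: float, r: float, n: int) -> float:
    """Probability of the primary judgment surviving n fallible reviews,
    assuming uncertainties compound by simple multiplication."""
    return p * r ** n

if __name__ == "__main__":
    p, r = 0.95, 0.95
    for n in (0, 1, 5, 20, 100):
        print(n, round(compounded_probability(p, r, n), 4))
```

Even with highly reliable judgments (95 percent each), the compounded probability collapses after enough iterations, which is exactly the “very low level” Hume’s argument predicts. Reid’s reply, discussed below, is that this multiplication model is the wrong one to begin with.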
A brilliant reply to Hume was penned by his fellow Scot, Thomas Reid, the founder of the “common sense” school of philosophy—an approach that exerted considerable influence in Scotland and America well into the nineteenth century. I cannot here explain Reid’s refutation of Humean skepticism in detail, so a summary of some basic points will have to do.
In Essays on the Intellectual Powers of Man (1785), Reid made a general observation that had been made many times before.
To pretend to prove by reasoning that there is no force in reason, does indeed look like a philosophical delirium. It is like a man’s pretending to see clearly, that he himself and all other men are blind.
Reid then cut to the heart of the matter. Whereas we normally contrast probability with logical demonstrations or with certainty, Hume chose instead to contrast probability with infallibility. But since infallibility is not an option for fallible humans, this merely excludes certainty at the outset, and no useful purpose is served. To concede that all our judgments are fallible merely means that we should exercise caution in forming those judgments; it does not mean that we are unable to justify claims of certainty.
The skeptic, according to Reid, “makes no objection to any part of the demonstration, but pleads my fallibility in judging. I have made proper allowance for this already, by being open to conviction.” This observation was the basis for Reid’s critique of Hume’s claim that each additional fallible judgment about the reliability of our initial judgment merely makes the initial judgment more uncertain. If we back up the testimony of an eyewitness by bringing in the corroborating testimony of another person, this makes the initial testimony more certain, not less. The same is true of our reflexive judgments in which we assess the reliability of our primary judgment. To conclude, by means of additional (reflexive) judgments, that our primary judgment was based on sufficient evidence is to render the initial judgment more certain, not less. It is pointless and absurd to drag in the fallibility of reflexive judgments and conclude that they add to the total uncertainty of a belief, for fallibility should not be contrasted with certainty to begin with. Fallibility is factored into all our judgments, both primary and reflexive, at the outset. Here is how Reid put it:
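Reid’s point about corroborating testimony can also be put in probabilistic terms. The following is my gloss, not Reid’s own argument: under a standard Bayesian model in which independent witnesses each report truthfully with probability `q`, every additional agreeing report raises the posterior probability of the original claim rather than lowering it.

```python
# A Bayesian gloss on Reid's corroboration point (my illustration, not
# Reid's own argument). Each independent witness is reliable with
# probability q; agreeing reports make the event *more* probable, the
# opposite of Hume's compounding-uncertainty picture.

def posterior_after_witnesses(prior: float, q: float, n: int) -> float:
    """Posterior probability that the event occurred, given n independent
    witnesses (each reliable with probability q) who all affirm it."""
    likelihood_true = q ** n          # chance of n affirmations if it occurred
    likelihood_false = (1 - q) ** n   # chance of n affirmations if it did not
    numerator = prior * likelihood_true
    return numerator / (numerator + (1 - prior) * likelihood_false)

if __name__ == "__main__":
    prior, q = 0.5, 0.8
    for n in (1, 2, 3, 5):
        print(n, round(posterior_after_witnesses(prior, q, n), 4))
```

On this model, fallibility (a reliability of 0.8 rather than 1.0) is built into each judgment from the outset, just as Reid says, yet the accumulation of fallible corroborations drives the posterior steadily upward—toward certainty, not away from it.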
A wise man who has practised reasoning knows that he is fallible, and carries this conviction along with him in every judgment he forms. He knows, likewise, that he is more liable to err in some cases than in others. He has a scale in his mind, by which he estimates his liableness to err, and by this he regulates the degree of his assent in his first judgment upon any point.
I realize that my sketch of Reid’s argument may be a bit difficult for some readers to follow, so it may help if I point out that Reid was making basically the same point that Ayn Rand (and other philosophers) later made. Fallibility is an inherent feature of human nature and of our knowledge claims, and this is why we should be open to persuasion—or “conviction,” as Reid (quoted above) put it. And since fallibility generates the need for cognitive standards that distinguish among different degrees of conviction, such as probable and certain beliefs, it is quite illegitimate to invoke fallibility as the grounds for claiming that certainty is impossible. Contrary to Hume, fallibility should be contrasted with infallibility, not with certainty.