Concluding Remarks on Fallibility and the Moral Implications of Beliefs
Smith explores the indispensable role of value commitments in our quest for knowledge.
In my last essay I briefly discussed some issues relating to human fallibility, and I summarized Thomas Reid’s response to the mitigated skepticism of David Hume. Reid’s approach influenced a number of “ordinary language philosophers” during the twentieth century, as we see in the writings of J.L. Austin (1911–60). The fact that man is “inherently fallible,” argued Austin, does not entail that he is “necessarily so.”
Machines are inherently liable to break down, but good machines don’t (often). It is futile to embark on a “theory of knowledge” which denies this liability: such theories constantly end up by admitting the liability after all, and denying the existence of “knowledge.” (“Other Minds,” in Philosophical Papers, 1961, p. 66.)
According to Austin, if the skeptic wishes to attack a knowledge claim for which evidence has been provided, he must attack the evidence itself. It is illegitimate to appeal solely to fallibility as a reason to doubt. “[B]eing aware that you may be mistaken doesn’t mean merely being aware that you are a fallible human being; it means that you have some concrete reason to suppose that you may be mistaken in this case.”
Michael Polanyi made a similar point in his highly interesting book Personal Knowledge (1962, p. 314):
To postpone mental decisions on account of their conceivable fallibility would necessarily block all decisions for ever, and pile up hazards of hesitation to infinity. It would amount to voluntary mental stupor.
As I noted in my last essay, Ayn Rand hit the nail on the head when she pointed out that an infallible being would have no use for cognitive standards. To put this another way, an infallible being, a being incapable of error, would have no need to make cognitive judgments of truth and falsehood, certainty and probability, and the like. Only with judgments do cognitive standards apply. A mere concept or idea is neither true nor false. If I form the concept of a winged unicorn, without affirming or denying the ontological status of this concept, it would be improper to label my concept true or false. Only if I make a judgment about this concept, as when I affirm or deny the existence of unicorns, do cognitive standards come into play.
This may seem an obvious point but it is an important one nonetheless. Fallibility pertains primarily to our judgments, and this is why our judgments need to be justified with sufficient evidence and/or arguments. Of course, the topic of what does and does not qualify as sufficient justification is a contested matter in its own right. This is why we frequently rank our judgments on a scale from possible to probable to certain. If there is some but not much evidence for p (a proposition), we may say that p is possibly true. If the evidence is substantial but not conclusive, we may say that p is probably true. If the evidence is overwhelming and if there exists no credible counter‐evidence, we may say that p is certainly true.
But what about so‐called analytic truths, as we find in mathematics? Or what about the laws of logic? We cannot doubt the proposition that 2+2=4, nor can we doubt the Law of Identity (A is A), so perhaps we should call these and similar judgments infallible. This would be a mistake. Fallibility pertains not to particular judgments of truth but rather to the nature of the human intellect. A fallible being is perfectly capable of making judgments that are absolutely certain, but this does not render his ability to judge any less fallible. After all, even a person who understands the rules of addition may err when adding a long column of numbers; this is a manifestation of his fallibility. Of course, it is far more difficult to imagine error when dealing with the self‐evident axioms of logic; we may even say that an erroneous judgment about a self‐evident truth is inconceivable (though we might claim self‐evidence for a judgment that is no such thing). But, again, to say that a judgment is absolutely certain is not to say that the being who renders this judgment is infallible. Fallible humans can form judgments that are beyond any realistic possibility of error. Infallible judgments are possible only for an infallible being. For an infallible being to declare some truths “self‐evident” would serve no purpose, since all of his beliefs would be self‐evident from our perspective. Only a fallible being needs to distinguish self‐evident truths from other kinds of knowledge.
I now wish to return to the question that started my lengthy series on epistemology: May we properly speak of beliefs per se as “moral” or “immoral”? We normally speak of beliefs and persons as being rational or irrational, reasonable or unreasonable, but does it add anything substantial to the mix if we dub certain beliefs “immoral”? As I explained in an earlier essay, I have gone back and forth on this issue for many years. I previously offered the example of a militant anti‐Semite. Should his belief be condemned as immoral, even if it has no practical implications for the anti-Semite—that is, even if the person does not advocate any kind of persecution or discrimination?
If we tend to judge as immoral the beliefs of an anti‐Semite or racist, this is usually for two reasons: First, we assume that such beliefs will have immoral practical consequences. Second, we tend to view such beliefs as so outrageous and unjustifiable that we find ourselves unable to believe that they could honestly be defended by any reasonable person. This second assumption is the more pertinent to the problem I am considering here.
The cognitive quality of a person’s beliefs reflects on his character, so when a person expresses a belief that is especially outrageous or off‐the‐wall, we tend to question the commitment of that person to the pursuit of truth. We suspect that he is gerrymandering whatever “evidence” he offers to fit a preconceived conviction, while refusing to consider seriously any evidence or argument that would call his conviction into question. We assume, in other words, that the anti‐Semite or racist is not interested in truth but is merely using his beliefs to prop up his unexamined feelings and prejudices. It is in this sense that I think we may properly call a belief “immoral”—not in a direct sense but as an indication of a person’s lack of commitment to reason.
Of course, this inference should not be made with every irrational belief we may encounter. Even reasonable people may lapse into irrationalism from time to time. None of us is immune to the perils of fallibility, and all of us may occasionally let our feelings overwhelm our reasoned judgments. Only when an outrageous belief is persistently proclaimed and defended—typically with unshakeable certainty—do we call into question the intellectual integrity of the believer. And to doubt or deny a person’s intellectual integrity is clearly a moral issue.
I will conclude this essay (and this series) with a few comments about commitment to values. Every judgment we make implies a commitment to the abstract value of truth—provided that we expect others to take us seriously. Our judgments also embody our commitment to the cognitive standards (coherence, logical rigor, evidence, and so forth) that make it possible for us to distinguish between true and false judgments. Reasonable people will take our judgments seriously only if they assume that we have committed ourselves to these cognitive values. Without this assumption our judgments will probably be dismissed out of hand as those of a fool or of a thoroughly irrational or dishonest person with no regard for truth.
In the realm of knowledge a rational person believes as she must, not as she can. To reason is to be committed to the impersonal rules of logic, canons of reliable evidence, and so forth, in reaching a conclusion, however we may personally feel about that conclusion. A belief embraced willy‐nilly because it comforts us or conforms to our prejudices is not based on reasoning but on a refusal to reason. A flagrantly irrational person has no legitimate claim on the attention or respect of others; her judgments will be dismissed immediately as lacking credibility and so undeserving of serious consideration. (See my discussion of credibility here.) Only if we believe that a person is committed to the standards of correct reasoning will we take her judgments seriously enough to merit examination. Otherwise, her judgments will fly out of our consciousness as quickly as they flew into it.
To say that every cognitive judgment involves a value judgment—a commitment to the value of truth and to the canons of correct reasoning—is also to say that every cognitive judgment entails an “ought” judgment, if only implicitly. This ought judgment appears on two levels—one internal, the other external. Internally, when we commit to the value of truth, this entails that we ought to learn and employ the standards of correct reasoning so we can attain the desired value of truth. Externally, when we argue with another person we usually think that our interlocutor, having been presented with what we regard as rational and compelling arguments, ought to agree with us. Of course, fulfillment of the latter expectation is frequently the exception rather than the rule (for reasons I cannot explore here). Nevertheless, the very possibility of rational argumentation is predicated on the belief in a common system of cognitive values. This is the indispensable normative foundation of a rational community.