John Rawls bases much of A Theory of Justice on claims about how rationally self-interested agents might design a society if they knew that they would soon inhabit it, but did not know their eventual place within it. Such agents would know basic facts about human life, but nothing about their own particular circumstances. He termed this constraint the veil of ignorance.
Agents would first choose a decision-making criterion by which to judge institutional systems, then judge those systems in turn, and finally be placed into the winning system. Rawls argues that they would settle on a set of equal and relatively extensive noneconomic liberties very early on. He then tackles justice in property holdings, and here is where he tends to set libertarians’ hair on fire.
It seems to me there are four bases for judging societies’ distribution of material goods that might be advanced in the original position. Remember, we’re not talking about specific institutions here, but only the methods we should use to judge them. We’re also not talking about noneconomic liberties, which, again, Rawls holds are not to be infringed.
At that level of abstraction, I find that the method of judging we pick turns mostly on what we’re supposedly ignorant about.
1. First consider our old friend, utilitarianism. In the original position, we consider the bundles of goods that we attach to various social stations. It would be natural to ask about these stations’ frequencies. We could then play the odds and try to maximize our expected payoff, because that’s what a rationally self-interested actor would do.
Indeed, if we knew both the different shares of goods to be enjoyed and also their frequencies in various types of society, and if we were ignorant only about our eventual place in the distribution, then this strategy seems the only sensible approach. In this month’s Cato Unbound, David Friedman asks whether it is better “to have a world where everyone is at a utility level of a hundred [or] a world with one person at ninety-nine and everyone else at a thousand.” Clearly, the latter gives a larger expected payoff and should be compelling to anyone playing the odds.
If this strategy is not compelling, then you’re probably avoiding it for a reason not given in the original assumptions. If you’re worried, for example, that the folks with 1,000 would use their holdings to turn the one guy with 99 into a slave, then you are worried about basic liberties, not the distribution of material goods. This is a legitimate concern, but it’s also one we have already answered. It may help to consider a different set of distributions: In (a), all people in the world have 100; in (b), all people in the world have 101, save for one individual who has 99. The danger of slavery is minimized, and one system clearly offers a greater expected payoff than the other.
Only the staunchest egalitarian would pick (a); the rest of us would pick (b). I know I would. All of which is to say that utilitarian reasoning can sometimes appeal to almost anyone.
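The expected-payoff arithmetic behind these comparisons is simple to sketch. In the sketch below, an agent is assigned a position uniformly at random, and the population size is an arbitrary assumption for illustration:

```python
def expected_payoff(distribution):
    """Expected payoff for an agent assigned a position uniformly at random."""
    return sum(distribution) / len(distribution)

N = 1_000_000  # hypothetical population size, chosen only for illustration

# Friedman's comparison: everyone at 100, vs. one person at 99 and the rest at 1000.
print(expected_payoff([100] * N))                # 100.0
print(expected_payoff([99] + [1000] * (N - 1)))  # just under 1000

# Distributions (a) and (b): everyone at 100, vs. everyone at 101 save one at 99.
print(expected_payoff([100] * N))                # 100.0
print(expected_payoff([99] + [101] * (N - 1)))   # just under 101
```

On these numbers the odds-playing choice is unambiguous: (b) beats (a), and Friedman's unequal world beats his equal one, by exactly the margins the text describes.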
Rawls, however, doesn’t let us have this option. His version of the veil of ignorance does not permit knowledge about the relative frequencies of different sets of holdings. This makes utilitarianism an impossibility, though it’s never clear to me why this should be so.
2. Consider next that we might be ignorant about what makes people happy. If so, then we may want to discover this information. Some have claimed that the free market discovers it; the result looks something like the thought of Herbert Spencer or Ludwig von Mises. Both were influenced by utilitarianism, but neither was, strictly speaking, a utilitarian: neither thought we had the information needed to carry out the utilitarian calculus.
Mises in particular argued that economic activity is a process of discovering what our needs are and how best to satisfy them. The answers are always provisional and subjective; it is hubris to expect agents in the original position, who are not economic actors at all, to be good at distributing resources. Instead, they should agree to free enterprise and free exchange, then enter society and start learning. Even when we consider essential goods like healthcare and food, our interests still diverge. That a good is essential doesn’t mean that there is one best way of supplying it, and even if one best way exists, it doesn’t follow that we know what it is.
3. Next consider that we don’t know the relative frequencies of the different positions in society. Under (3), we may or may not know what makes people happy. Weirdly, it doesn’t seem to matter. Given (3), we know only that some position (possibly with a whole lot of people in it) is the least well-off. If we don’t know its frequency, then suddenly the chance that we might end up within it will command more of our attention.
This purportedly yields the maximin strategy advocated by Rawls, in which differences in material holdings are tolerated only insofar as they increase the absolute welfare of the least well-off. If we don’t know the odds, we should play it safe.
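As a decision rule, maximin looks only at each society’s worst-off station and ignores frequencies entirely. A minimal sketch, contrasting it with the odds-playing rule from (1), using invented payoffs:

```python
def maximin_value(distribution):
    """Payoff of the least well-off station -- all that maximin considers."""
    return min(distribution)

def expected_value(distribution):
    """Average payoff, treating stations as equally likely -- this needs odds."""
    return sum(distribution) / len(distribution)

a = [100, 100, 100]   # strict equality
b = [99, 1000, 1000]  # one station slightly worse off, the rest far better

# Maximin prefers (a): its worst station, 100, beats (b)'s worst, 99.
print(max([a, b], key=maximin_value))   # [100, 100, 100]

# Playing the odds prefers (b): a far higher expected payoff.
print(max([a, b], key=expected_value))  # [99, 1000, 1000]
```

The two rules diverge precisely when the worst-off station is slightly worse but everything else is much better, which is why the choice between them does so much work in the argument.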
But I don’t think this inference holds up. Agents in the original position must also consider the number of different economic stations that they would ordain. (3) says that the relative frequencies are totally unknown for all of them; if so, the smart thing to do would be to ordain a larger number of stations, each still of unknown frequency. Nothing forbids us from doing so, and if there are one billion different stations in our society, and if only one of them is the least well-off, then it’s not clear why we are to be especially concerned with it.
Any attempt to deny this line of reasoning rests ultimately on a determination that some of these ordained social stations are ad hoc, crafted only for the purpose of artificially altering the odds. But that implies that someone knows something about the initial odds, in order to be able to dismiss my attempt. Oddsmaking seeps in despite our best efforts to keep it out, and (3) collapses into either (1) or (2).
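The station-multiplication point can be made concrete with invented payoffs and counts. Ordaining extra well-off stations leaves the maximin verdict untouched, but any uniform prior over stations, which is itself an oddsmaking assumption, dilutes the weight of the worst one:

```python
few  = [99, 1000]            # two ordained stations
many = [99] + [1000] * 999   # same worst station, plus 999 more well-off ones

# Maximin cannot distinguish the two societies: the worst station is identical.
print(min(few), min(many))   # 99 99

# But treating each station as equally likely -- already an odds claim --
# the chance of landing in the worst station collapses as stations multiply.
print(1 / len(few))          # 0.5
print(1 / len(many))         # 0.001
```

Dismissing the 999 extra stations as ad hoc requires some view about their likely frequencies, which is exactly the oddsmaking that (3) was supposed to rule out.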
4. Imagine that your worst enemy is to choose your place for you. In an original position governed by (4), we may be very knowledgeable about nearly everything. But none of it matters, because the guy making the choices just happens to hate us.
Of all the choices, (4) most clearly yields Rawls’ justice as fairness: Self-interested people will care about nothing except the lot of the least well-off, because that lot will surely be their own. But it’s not at all clear that we should call this setup a position of ignorance. It seems if anything that the agents know a bit too much.
It’s also not clear why the real world should be governed by the fear of a purely speculative grudge. In real societies, we are usually at pains to prevent an arbitrary personal enmity from affecting anyone’s lot in life. And why not? Because that would be morally atrocious.
Rather than being chosen by one’s worst enemies, one’s economic holdings seem to be chosen (if that’s the right word) by a committee consisting of oneself, plus one’s parents, teachers, co-workers, neighbors, some random strangers, and a roll of the dice. Agents in the original position should be aware of this fact, which has obtained in virtually all societies to date, even very unfree ones. To my mind it seems a basic fact about how humans have always lived. Given this fact, we might still want to constrain that committee’s menu of possible choices. And that’s perfectly allowed. Doing so, however, might look a lot more like (1) or (2), and not at all like (4).