Paul Krugman’s Nostalgianomics: Economic Policies, Social Norms, and Income Inequality
What accounts for the rise in income inequality since the 1970s? According to most economists, the answer lies in structural changes in the economy—in particular, technological changes that have raised the demand for highly skilled workers and thereby boosted their pay. Opposing this prevailing view, however, is Princeton economist and New York Times columnist Paul Krugman, winner of the 2008 Nobel Prize in economics. According to Krugman and a group of like‐minded scholars, structural explanations of inequality are inadequate. They argue instead that changes in economic policies and social norms have played a major role in the widening of the income distribution.
Krugman and company have a point. For the quarter century or so after World War II, incomes were much more compressed than they are today. Since then, American society has experienced major changes in both political economy and cultural values. And both economic logic and empirical evidence provide reasons for concluding that those changes have helped to restrain low‐end income growth while accelerating growth at the top of the income scale.
However, Krugman and his colleagues offer a highly selective and misleading account of the relevant changes. Looking back at the early postwar decades, they cherry‐pick the historical record in a way that allows them to portray that time as an enlightened period of well‐designed economic policies and healthy social norms. Such a rosy‐colored view of the past fails as objective historical analysis. Instead, it amounts to ideologically motivated nostalgia.
Once those bygone policies and norms are seen in their totality, it should be clear that nostalgia for them is misplaced. The political economy of the early postwar decades, while it generated impressive results under the peculiar conditions of the time, is totally unsuited to serve as a model for 21st‐century policymakers. And as to the social attitudes and values that undergirded that political economy, it is frankly astonishing that self‐described progressives could find them attractive.
“The America I grew up in was a relatively equal middle‐class society. Over the past generation, however, the country has returned to Gilded Age levels of inequality.”
So sighs Paul Krugman, the Nobel Prize–winning Princeton economist and New York Times columnist, in his recent book The Conscience of a Liberal. The sentiment is nothing new: political progressives like Krugman have been decrying the trend toward greater income inequality for many years now.
Yet Krugman has added a novel twist to the long‐running inequality debate. In seeking explanations for the widening spread of incomes since the 1970s, researchers have focused overwhelmingly on broad structural changes in the economy: in particular, technological change, demographic shifts, and the rise of “winner‐take‐all” or “superstar” markets. But Krugman argues that these structural explanations are insufficient. Instead, or at least in addition, he points the finger at politics. “Since the 1970s,” according to Krugman, “norms and institutions in the United States have changed in ways that either encouraged or permitted sharply higher inequality. Where, however, did the change in norms and institutions come from? The answer appears to be politics.”
To understand Krugman’s argument, we can’t just start in the 1970s. Instead, we have to back up to the 1930s and ’40s—when, he contends, the “norms and institutions” that shaped a more egalitarian society were created. “The middle‐class America of my youth,” Krugman writes, “is best thought of not as the normal state of our society, but as an interregnum between Gilded Ages. America before 1930 was a society in which a small number of very rich people controlled a large share of the nation’s wealth.” But then came the twin convulsions of the Great Depression and World War II, and the country that arose out of those trials was a very different place. “Middle‐class America didn’t emerge by accident. It was created by what has been called the Great Compression of incomes that took place during World War II, and sustained for a generation by social norms that favored equality, strong labor unions and progressive taxation.”
The “Great Compression” to which Krugman refers is a term coined by economists Claudia Goldin of Harvard and Robert Margo of Vanderbilt University to describe the dramatic narrowing of the nation’s wage structure during the 1940s. According to Krugman, the real wages of manufacturing workers jumped 67 percent between 1929 and 1947, while the top 1 percent of earners saw a 17 percent drop in real income. These egalitarian trends can be attributed to the exceptional circumstances of the period: precipitous declines at the top end of the income spectrum due to economic cataclysm; wartime wage controls that tended to compress wage rates; rapid growth in the demand for low‐skilled labor, combined with the labor shortages of the war years; and rapid growth in the relative supply of skilled workers as high school graduation rates roughly doubled from 29 percent in 1930 to 57 percent in 1950.
But here’s the puzzle: the return to peacetime and prosperity did not result in a shift back toward the status quo ante. Instead, the new, more egalitarian income structure persisted for decades. Why? “This persistence,” Krugman argues, “makes a strong case that anonymous market forces are less decisive than Economics 101 teaches.” In support of this claim, he cites economists Thomas Piketty of the Paris School of Economics and Emmanuel Saez of the University of California at Berkeley, authors of widely discussed studies of changes in the income distribution. According to Piketty and Saez, “this pattern of evolution of inequality is additional indirect evidence that nonmarket mechanisms such as labor market institutions and social norms regarding inequality may play a role in setting compensation.”
What were the egalitarian institutions and norms that supposedly held income extremes in check? Here Krugman leans heavily on a paper by MIT economists Frank Levy and Peter Temin. They argue that postwar American history has been a tale of two widely divergent systems of political economy. First came the “Treaty of Detroit,” characterized by heavy unionization of industry, steeply progressive taxation, and a high minimum wage. Under that system, median wages kept pace with the economy’s overall productivity growth, and incomes at the lower end of the scale grew faster than those at the top. Beginning around 1980, though, the Treaty of Detroit gave way to the free‐market “Washington Consensus.” Tax rates on high earners fell sharply, the real value of the minimum wage declined, and private‐sector unionism collapsed. As a result, most workers’ incomes failed to share in overall productivity gains while the highest earners had a field day.
This revisionist account of the fall and rise of income inequality has important implications for public policy. Under the conventional view, rising inequality since the 1970s has been understood as a side effect of economic progress—namely, continuing technological breakthroughs, especially in communications and information technology. Consequently, when economists have supported measures to remedy inequality, they have typically shied away from structural changes in market institutions. Rather, they have endorsed more income redistribution to reduce post‐tax income differences as well as various social policies designed to raise the skill levels of lower‐paid workers (e.g., remedial education and job retraining programs).
By contrast, Krugman and his fellow revisionists see the rise of inequality as a consequence of economic regress—in particular, the abandonment of well‐designed economic institutions and healthy social norms that promoted widely shared prosperity. Such an assessment leads to the conclusion that we ought to revive the institutions and norms of Paul Krugman’s boyhood—in broad spirit at least, if not in every specific detail. I suggest therefore that “nostalgianomics” is a handy term for this revisionist challenge to prevailing interpretations of inequality.
So, what to make of nostalgianomics? Let me start by giving its proponents their due. Their historical argument makes a good deal of sense—at least as a supplement to, if not a wholesale substitute for, conventional accounts. As I will review, there is good evidence that changes in economic policies and social norms have contributed to a widening of the income distribution. Blinkered by ideological commitments, however, the partisans of nostalgianomics give a highly selective account of what the relevant policies and norms actually were and how exactly they changed.
Specifically, the economic system that Levy and Temin call the Treaty of Detroit was built on extensive cartelization of markets, limiting competition to favor producer welfare over consumer welfare. And those restrictions on competition were buttressed by entrenched prejudices concerning race and the role of women in society, as well as the prevailing postwar conformism of the “Organization Man.” Those social norms were swept away in the cultural tumults of the 1960s and ’70s, as more liberal and individualistic values displaced traditional mores. Restrictions on competition in product and capital markets were then substantially reduced during the 1970s and ’80s, to the applause of economists across the ideological spectrum. These salutary developments may have contributed to the rise of income inequality, true enough. But only through the distorting lens of nostalgia can what came before be seen as the “good old days.”
Serious and challenging issues are raised by increased economic inequality, but the nostalgianomics of Krugman et al. obscures rather than clarifies them. A gauzy sentimentalism about the lost world of one’s childhood is an understandable temptation as we age—but it has no place in sound social science or policy analysis.
Income Inequality’s Many Causes
At issue here is the rise in income inequality since the 1970s. Let’s start by clarifying exactly what that means.
First of all, it means we aren’t talking about wealth inequality. To be sure, that is another important element of differences in overall economic well‐being. And, obviously, wealth inequality can contribute to income inequality, and vice versa. But the trends in wealth inequality and income inequality have been quite different. According to economists Emmanuel Saez of Berkeley and Wojciech Kopczuk of Columbia University, wealth inequality (at least as measured by the share of total wealth held by the wealthiest 1 percent of Americans) has trended up slightly since the 1970s, but wealth remains considerably less concentrated than it was during the 1950s and ’60s—and dramatically less so than back in the 1920s.
Even when we restrict our focus to income differences, we still have a lot of sorting out to do. How do we go about measuring those differences? The most comprehensive measure is the so‐called Gini coefficient, which quantifies the deviation of the overall income distribution from equal incomes for all. Trends in the Gini coefficient, however, do not tell us about how particular groups within society are doing. For example, overall income inequality could be rising because incomes at the bottom are stagnant or falling, or because incomes at the top are soaring, or because of a decline in the number of people in the middle ranks. To get a more detailed look at income trends, we often look at the share of total income accounted for by particular segments of the income distribution: for example, the bottom quintile, the top 10 percent, the top 1 percent, etc. Or we compare ratios of incomes: for example, 90/10 (income in the 90th percentile of the distribution compared to income in the 10th percentile), 95/50, or 50/10. In addition, we might want to look at other ratios—black income to white income, say, or female income to male income—or at Gini coefficients for particular groups within society (e.g., income inequality among blacks or among women). For something as complex as changes in the pattern of millions of incomes over time, no single measure is capable of revealing the whole picture. And over a given period of time, some indicators of inequality may be rising while others are holding steady or falling.
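Since the Gini coefficient and percentile ratios carry much of the discussion that follows, a minimal sketch of how they are computed may be useful. The income figures below are invented for illustration, and the percentile lookup uses a simple nearest‐rank method:

```python
def gini(incomes):
    """Gini coefficient: 0 means perfect equality; (n - 1) / n means one person has everything."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    # Closed form for sorted data: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

def pct_ratio(incomes, hi, lo):
    """Ratio of incomes at two percentiles, e.g. 90/10, using the nearest-rank method."""
    xs = sorted(incomes)
    pick = lambda p: xs[min(len(xs) - 1, int(p / 100 * len(xs)))]
    return pick(hi) / pick(lo)

equal = [50_000] * 10
skewed = [20_000] * 5 + [40_000] * 3 + [150_000, 400_000]
print(round(gini(equal), 3))      # 0.0
print(round(gini(skewed), 3))     # 0.586
print(pct_ratio(skewed, 90, 10))  # 20.0
```

Note how the two measures can move independently: a transfer between two earners who both sit above the 90th percentile changes the Gini coefficient but leaves the 90/10 ratio untouched.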
Consequently, when we talk about a rise in income inequality, pinning down what we mean by “inequality” is actually fairly tricky. And pinning down what we mean by “income” turns out to be tricky as well. Income includes not only money earned in the workplace or from investments, but also employee benefits (e.g., health insurance) and government transfers (e.g., Social Security checks). Also, of course, pre‐tax income and post‐tax income are two very different things. If we focus only on workplace earnings, or only on money income, or if we exclude the effects of taxes and government transfers, we’re not getting the complete picture.
Furthermore, income statistics are kept on an annual basis, but people’s incomes tend to fluctuate from year to year. If we measure people’s income over a longer period, differences in incomes will appear somewhat reduced as those year‐to‐year differences get washed out. If temporary fluctuations in people’s income, also known as income volatility, are increasing over time, income inequality will appear to rise even when there is no change in the pattern of incomes over the longer term. Indeed, according to a paper by Peter Gottschalk of Boston College and Robert Moffitt of Brown University, some one‐third of the measured increase in earnings inequality between the periods 1970–78 and 1979–87 is due to an increase in the volatility of earnings.
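Gottschalk and Moffitt’s point can be illustrated with a small simulation: give each person a fixed “permanent” income, add independent year‐to‐year shocks, and measured dispersion is higher for single‐year incomes than for multi‐year averages. Everything here (the population size, income distribution, and noise level) is an arbitrary assumption, and dispersion is summarized by the coefficient of variation rather than the Gini for brevity:

```python
import random
import statistics

random.seed(42)

n_people, n_years = 1_000, 9
# Fixed "permanent" incomes, lognormal by assumption.
permanent = [random.lognormvariate(10.5, 0.5) for _ in range(n_people)]
# Each year's observed income = permanent income times an independent transitory shock.
annual = [[p * random.lognormvariate(0, 0.3) for _ in range(n_years)] for p in permanent]

one_year = [years[0] for years in annual]              # a single annual snapshot
averaged = [sum(years) / n_years for years in annual]  # nine-year average income

cv = lambda xs: statistics.pstdev(xs) / statistics.mean(xs)
# Averaging washes out the transitory shocks, so measured dispersion falls.
print(cv(one_year) > cv(averaged))  # True
```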
An additional complication arises because the size of the economic unit whose income is being measured has changed over time. We typically measure the annual income of “households,” but with increasing numbers of single‐parent families and people living alone, the size of the average American household has shrunk since the 1970s. As a result, a significant portion of the rise in measured income inequality is due to these demographic changes rather than to any changes in the workplace. We can get around this wrinkle by looking at wage inequality—although, of course, trends in individual earnings miss the fact that people actually do live their economic lives as members of households (in particular, they combine their earnings with those of their spouses).
In view of all these complications, what can we say about changes in the pattern of American incomes? Here I want to look at pre‐tax income, as my focus is on divergent trends in the way the marketplace rewards work and the underlying causes of those trends. First, overall income inequality as measured by the Gini coefficient is up since the 1970s: from 0.395 in 1974 to 0.470 in 2006. Over that same period, the 95/50 household income ratio (household income at the 95th percentile compared to median household income) rose from 2.73 to 3.61—an increase of nearly a third. Looking at wages as opposed to household income tells a similar story. According to an analysis done by Terry Fitzgerald of the Federal Reserve Bank of Minneapolis, between 1975 and 2005 real wages at the 90th and 95th percentiles grew twice as fast as median wages.
Meanwhile, earnings at the very top of the scale have grown by leaps and bounds over the past generation. Often this phenomenon is discussed in terms of the growing share of total income accounted for by the top 1 percent, or top 0.1 percent, of earners. All of these statistics, though, are based on income tax data; and income reported for tax purposes, especially by high‐income taxpayers, is extremely susceptible to changes in the tax code. Thus, analyses that track incomes over decades and multiple major changes in tax policy are highly problematic. Nevertheless, anecdotal evidence abounds that top performers in their fields—in sports and entertainment as well as business—have enjoyed huge gains in recent decades. To take one particular example, I recently re‐read Jerry Kramer’s account of his 1967 season as right guard for the Green Bay Packers. Coming off an appearance in the Pro Bowl, he earned $27,500 that season (or about $171,000 in 2007 dollars). By comparison, the 2007 salaries of the guards who started in the previous Pro Bowl averaged $3,187,000. That works out to more than an 18‐fold increase over 40 years; by comparison, median real household income rose only 29 percent between 1967 and 2007, and even real income at the 95th percentile increased only 72 percent.
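The Kramer comparison is simple arithmetic: inflate the 1967 salary into 2007 dollars, then take the ratio. A sketch using the figures quoted above (the inflation multiplier is the one implied by the text’s $27,500 ≈ $171,000 conversion):

```python
kramer_1967 = 27_500                      # Jerry Kramer's 1967 salary, nominal dollars
inflation_multiplier = 171_000 / 27_500   # ≈ 6.2: 1967 -> 2007 factor implied by the text
kramer_2007_dollars = kramer_1967 * inflation_multiplier  # about 171,000 in 2007 dollars

probowl_guards_2007 = 3_187_000           # average 2007 salary of Pro Bowl starting guards
fold_increase = probowl_guards_2007 / kramer_2007_dollars
print(round(fold_increase, 1))            # 18.6 -- "more than an 18-fold increase"
```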
The nation’s corporate elite has profited in similarly spectacular fashion. According to a long‐term historical study of top corporate executives in large publicly traded firms conducted by Carola Frydman of the MIT Sloan School of Management and Raven Saks of the U.S. Federal Reserve, median compensation (in 2000 dollars) averaged $930,000 during the 1970s, but jumped all the way to $4.08 million in 2000–2005. Between 1970 and 2005, the ratio of median executive compensation to average earnings per full‐time‐equivalent worker rose from under 30 to 110.

There is now an enormous scholarly as well as popular literature on the possible causes of increased income and wage inequality. Without attempting any kind of comprehensive synthesis of that literature here, I will briefly describe the most plausible structural explanations currently on offer—that is, explanations that link changes in the income distribution to broad changes in the economy and the workforce.
The leading explanation that emerges from the literature is one of “skill‐biased technical change” (SBTC). The idea here is that, with the explosive growth of information technology in recent decades, rising relative demand for highly skilled “knowledge workers” has resulted in a growing pay gap between those workers and their less‐skilled counterparts. An important refinement of the SBTC hypothesis emphasizes not only rising relative demand growth for skilled workers but also lagging relative supply growth. According to Harvard economists Claudia Goldin and Lawrence Katz, college‐educated workers as a percentage of the total workforce increased by only 2.0 percent a year from 1980 to 2005—down from the 3.8 percent increase per year between 1960 and 1980. The result of these interactions of supply and demand has been a big increase in the “college wage premium.” Goldin and Katz estimate that the difference between the average weekly wages of college graduates and those of high school graduates rose from 24 percent in 1979 to 63 percent in 2005.
An obvious challenge for the SBTC hypothesis is the fact that income inequality within skill groups has also been on the increase. Of course, unobserved skills may vary within those groups as well. But Thomas Lemieux of the University of British Columbia contends that the major explanation of so‐called “residual” inequality is another structural change: namely, the changing demographics of the American workforce. Americans today are both older and better educated than they were in the 1970s, and it turns out that income dispersion increases with both age and education—which makes sense. Younger and less‐skilled workers tend to be concentrated at the low end of the pay scale, while older and highly educated workers can fan out over a much broader range. According to Lemieux’s estimates, the overwhelming bulk of the increase in residual inequality can be explained by these “composition effects.”
Another distinctive aspect of the inequality picture that seems to need special explanation is the whopping increase in incomes at the very top of the pay scale. How to explain the rise of so‐called “superstar” markets? One explanation goes back to technology: people will pay top dollar to see the very best performers; and as technology (e.g., television, the Internet) expands the audience that those performers can reach, top performers profit accordingly. But in other cases (e.g., chief executive officers, investment bankers, elite lawyers), big increases in remuneration have come without any corresponding expansion of the “audience” or customer base.
Patterns of wages and incomes are extremely complex phenomena that reflect the confluence of untold millions of factors. Not surprisingly, the leading structural explanations of increased income inequality are less than fully satisfactory. Even if these explanations are correct as far as they go, they are doubtless incomplete. In other words, while the evidence does support a strong connection between structural changes in the economy and a wider income distribution, almost certainly other factors are at play as well.
Paul Krugman and company are well advised to search for additional causal connections in the realms of politics and culture. The fact that dramatic changes in both public policy and social norms have coincided with a marked change in income trends suggests that links between the two are a possibility worth considering.
What in particular should they consider? For changes in economic policies and social norms to contribute to the rise in income inequality, the changes would need to either restrain wage and income growth at the lower end of the income spectrum or accelerate its growth at the top end. Which is to say, they should look for changes that intensified (a) competition among less‐skilled workers for employment and/or (b) competition among employers for highly skilled workers.
When put that way, and knowing a little history, it quickly becomes apparent that changes in economic policies and social norms very likely have made significant contributions to the rise in income inequality. As I will detail below, the political economy of the postwar decades was characterized by a general suppression of competitive forces—a suppression that was aided in important ways by prevailing social norms. The changes in economic policies that have occurred in recent decades, buttressed by parallel changes in social norms, have worked to intensify competitive pressures in wide‐ranging ways. And in so doing, they worked in concert with changes in economic structure to stretch the income distribution.
The economic system that emerged from the New Deal and World War II was markedly different from the one that exists today. And the contrast between past and present is highlighted when we focus on one critical dimension: the degree to which competition is either encouraged or thwarted by public policy.
The postwar economic order was characterized by a host of laws and regulatory institutions that systematically limited competition—not only in product markets, but in capital markets and labor markets as well. The policies identified by Levy and Temin as key provisions of the Treaty of Detroit—highly progressive tax rates, a relatively high minimum wage, and a strong policy bias in favor of unionization and collective bargaining—were certainly part of the story. But the story went far beyond these limited elements.
Let’s begin by looking at product markets. First and most obviously, the transportation, energy, and communications sectors were subject to pervasive price and entry regulation. Railroad rates and service had been under federal control since the Interstate Commerce Act of 1887, but the Motor Carrier Act of 1935 extended the Interstate Commerce Commission’s regulatory authority to cover trucking and bus lines as well. In 1938, the Civil Aeronautics Act put airline routes and fares under the control of the Civil Aeronautics Board. After the discovery of the East Texas oil field in 1930, the Texas Railroad Commission acquired the effective authority to regulate the nation’s oil production in the name of maintaining price stability. Rates for the interstate transmission of natural gas were regulated by the Federal Power Commission under the Natural Gas Act of 1938, and a 1954 Supreme Court case expanded the commission’s control to include setting wellhead prices for natural gas. The Federal Communications Commission, created by the Communications Act of 1934, allocated licenses to radio and later television broadcasters and also regulated the rates charged by the AT&T telephone monopoly. Its regulatory power also extended to the emerging cable television and microwave‐based long‐distance telephone industries.
Limits on competition in product markets weren’t restricted to these so‐called “regulated industries.” Beginning with the Agricultural Adjustment Act of 1933, the prices and production levels of a wide variety of farm products were controlled by a byzantine complex of federal controls and subsidies. Manufacturers and other goods producers were shielded from international competition, not just by the devastation of World War II, but by high import tariffs. Although rates fell during the 1930s and ’40s from their dizzying Smoot‐Hawley highs, average tariffs on dutiable goods remained above 10 percent throughout the 1950s and ’60s. And in the retail sector, aggressive discounting was countered by state‐level “fair trade laws,” which allowed manufacturers to impose minimum resale prices even on nonconsenting distributors.
Comprehensive regulation of the U.S. financial sector in the wake of the Great Depression served to restrict competition in capital markets in a variety of ways. The Glass‐Steagall Act, part of the Banking Act of 1933, erected a wall between commercial and investment banking, thereby effectively brokering a market‐sharing agreement under which commercial banks and investment banks were protected from each other. The McFadden Act of 1927 added a federal ban on interstate branch banking to widespread state‐level restrictions on intrastate branching. “Regulation Q,” instituted under authority of the Banking Act of 1933, prohibited interest payments on demand deposits and set interest rate ceilings for time deposits. Provisions of the Securities Act of 1933 worked to limit competition in underwriting by outlawing pre‐offering solicitations and undisclosed discounts. These and other restrictions amounted to a significant dose of financial repression—or the artificial stunting of the depth and development of capital markets. Consider, for example, the striking fact that the ratio of U.S. stock market capitalization to gross domestic product fell from 0.75 in 1929 to 0.33 in 1950—and did not reach and then surpass the 1929 mark until the 1990s. The relative underdevelopment of the financial sector, meanwhile, likely muted the intensity of competition throughout the larger “real” economy. New entrants are much more dependent on a well‐developed financial system than are established firms, since incumbents can self‐finance through retained earnings or use existing assets as collateral. It follows, therefore, that a hobbled, less‐competitive financial sector acts as a barrier to entry and thereby reduces established firms’ vulnerability to competition from entrepreneurial upstarts.
The highly progressive tax structure of the early postwar decades may have further dampened competitive forces throughout the economy by discouraging entrepreneurship. The top marginal income tax rate shot up from 25 percent to 63 percent under Herbert Hoover in 1932, climbed as high as 94 percent during World War II, and stayed at 91 percent during most of the 1950s and early ’60s. In theory, the effects of progressive rates on the decision to become an entrepreneur can cut both ways. On the one hand, by reducing the risk of income shocks, progressive rates could act as a kind of income insurance policy that encourages potential entrepreneurs to be less risk averse. Also, higher rates could increase the value of the tax avoidance opportunities created by business ownership or self‐employment. Empirical research by economists William Gentry of Williams College and Glenn Hubbard of Columbia University, however, finds that these possible benefits are more than outweighed by the costs imposed by progressive rates. In their analysis, a progressive rate structure acts as a “success tax” that reduces the upside of possible entrepreneurial ventures relative to the wages of continued employment. The result is to discourage possible entrepreneurs from striking out on their own.
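Gentry and Hubbard’s “success tax” logic can be made concrete with a toy calculation: compare the expected after‐tax payoff of a risky venture under a flat tax and under a progressive schedule, where both schedules collect the same tax from a safe wage. The brackets, wage, and venture payoffs below are all invented for illustration and are not taken from Gentry and Hubbard:

```python
def flat_tax(income):
    # Hypothetical 20% flat schedule (integer arithmetic keeps the example exact).
    return income * 20 // 100

def progressive_tax(income):
    # Hypothetical two-bracket schedule: 20% up to 50,000, 60% on income above that.
    return min(income, 50_000) * 20 // 100 + max(income - 50_000, 0) * 60 // 100

safe_wage = 50_000  # both schedules take 10,000 from this wage, so the baseline is identical
venture = [(0.5, 150_000), (0.5, 0)]  # risky venture: coin flip between 150,000 and nothing

wage_after_tax = safe_wage - progressive_tax(safe_wage)                     # 40000
venture_after_flat = sum(p * (y - flat_tax(y)) for p, y in venture)         # 60000.0
venture_after_prog = sum(p * (y - progressive_tax(y)) for p, y in venture)  # 40000.0

print(wage_after_tax, venture_after_flat, venture_after_prog)
```

Pre‐tax, the venture’s expected payoff of 75,000 beats the 50,000 wage either way. After tax, it still comfortably beats the wage under the flat schedule, but the progressive schedule clips the upside and erases the advantage entirely: the “success tax” in miniature.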
Finally, competition in labor markets was subject to important restraints during the early postwar decades. Levy and Temin have identified two of those restraints: government encouragement of unionization and collective bargaining, and the imposition of an above‐market minimum wage. The Wagner Act of 1935 provided a major boost to the surging industrial unionism movement. Membership in labor unions as a percentage of total nonagricultural employment (otherwise known as union density) jumped from 13 percent the year the law was passed to 28 percent in 1938, more than doubling in just three years. During World War II, the strongly pro‐union tilt of the National War Labor Board helped to push union density above 30 percent, where it remained until the early 1960s. In particular, roughly three‐quarters of blue‐collar workers belonged to unions during this period.
The triumph of collective bargaining meant the active suppression of wage competition in covered industries—especially through the practice of “pattern bargaining,” in which a labor agreement negotiated with one target employer becomes the model for a whole industry. And in the interest of boosting wages, unions sometimes worked to restrict competition in their industries’ product markets as well. Thus, garment unions connived with trade associations to set prices and allocate production among clothes makers. And coal unions attempted to regulate production by dictating how many days a week mines could be open.
Meanwhile, a relatively high federal minimum wage imposed another significant restriction on wage competition. Established under the Fair Labor Standards Act of 1938, the federal minimum wage was originally set at $0.25 per hour. According to Levy and Temin, that wage floor equaled 27 percent of average output per hour in the nonfarm business sector. Throughout the 1950s and ’60s, that ratio of minimum wage to average hourly output generally stayed between 25 and 30 percent. As a result, workers whose skills were worth less than this threshold were priced out of the labor market in industries subject to the wage floor.
Although not mentioned by Levy and Temin, highly restrictive immigration policies provided another significant brake on labor market competition. With the establishment of country‐specific immigration quotas under the Immigration Act of 1924, the foreign‐born share of the U.S. population plummeted from 13 percent in 1920 to 5 percent by 1970. As a result, competition at the less‐skilled end of the U.S. labor market was substantially reduced compared to what would have been the case if liberal immigration policies had continued.
Interestingly, the immigration quota system did not apply to the western hemisphere. Consequently, inflows of workers from Mexico surged as restrictions on the rest of the world tightened. Between 1907 (when the “Gentlemen’s Agreement” between the United States and Japan stopped emigration from the latter country) and 1929, the number of Mexican‐born residents of the United States soared from 178,000 to 739,000. The Great Depression, however, precipitated a harsh crackdown: between 1929 and 1937, the Mexican population was halved as some 458,000 Mexicans—including native‐born children who were U.S. citizens—were arrested and summarily deported. Labor shortages during World War II brought on another wave of Mexican immigration—some of which occurred officially under the “Bracero Program” for temporary agricultural workers, but much of which did not. Another crackdown followed, as “Operation Wetback” apprehended over one million Mexicans in 1954 alone. Through the rest of the 1950s, the Bracero Program was expanded to permit between 400,000 and 450,000 immigrants a year, and unofficial immigration subsided. Through these cycles of openness and repression, wage competition from Mexicans was held more or less in check during the early postwar decades.
The political economy of the early postwar decades was thus distinguished by its system of extensive and mutually supporting restrictions on competition. Though some of that system dated back as far as the late 19th century, most of it was slapped together during the frenzied improvisations of the 1930s. The New Deal was the product of many conflicting impulses, but one clear theme was a push to limit competition—and, in particular, to protect incumbents from outside challengers— in virtually every sector of the economy. As the historian Ellis Hawley concluded in his landmark study of the New Deal:
Most New Deal planning was in the nature of government cartelization. It came at the behest of organized economic interest groups intent upon strengthening their market position through legal sanctions or government supports.… And its purpose, although this was often disguised as something else, was to help individual industries or particularistic pressure groups to promote scarcity and thus balance their output with demand, regardless of the dislocations that such action might bring in other areas of the economy.
The drive toward cartelization began with a comprehensive approach: the National Industrial Recovery Act of 1933. When that act’s system of industrial codes foundered on conflicts among rival interest groups and then was toppled by the Supreme Court, a patchwork arrangement of industry‐specific production and price controls took its place. Combined with price supports and production limits in agriculture, far‐reaching financial regulation, pro‐union labor legislation, and a dramatic increase in top marginal tax rates, the overall effect was to imbue American economic institutions with a decided tilt in favor of established producer interests (including producers of capital and labor as well as goods) at the expense of consumer welfare. That tilt would persevere until the 1970s.
The catastrophe of the Great Depression was the primary catalyst for the shift toward cartelization. It was widely believed at the time that the economic collapse had been brought about by unsustainable overproduction and consequent falling prices. Accordingly, restricting output and propping up prices seemed like the obvious strategy for reversing the downward spiral. More broadly, since the late 19th century, the clear trend in elite opinion had been toward the view that unregulated markets were an anachronism and that modern conditions required broad government control of economic life. The spectacular implosion of the market economy seemed to provide decisive vindication for such thinking. As the prominent New Dealer Rexford Tugwell put it: “The jig is up. The cat is out of the bag. There is no invisible hand. There never was.… We must now supply a real and visible guiding hand to do the task which that mythical, nonexistent agency was supposed to perform, but never did.” In such an intellectual climate, the conditions were highly favorable for industries and other economic interest groups to justify limits on competition in the name of the public interest.
Solidarity and Chauvinism
The anti‐competitive effects of the Treaty of Detroit were reinforced by the prevailing social norms of the early postwar decades. Just as there were clear and relevant differences between the policies of that era and those of today, likewise the values and attitudes that dominated during that time stand in distinct contrast to contemporary norms.
Krugman and company focus in particular on changing norms with respect to executive pay. Krugman quotes wistfully from John Kenneth Galbraith’s characterization of the corporate elite in his 1967 book The New Industrial State: “Management does not go out ruthlessly to reward itself—a sound management is expected to exercise restraint.” According to Krugman, “For a generation after World War II, fear of outrage kept executive salaries in check. Now the outrage is gone. That is, the explosion in executive pay represents a social change … like the sexual revolution of the 1960’s—a relaxation of old strictures, a new permissiveness, but in this case the permissiveness is financial rather than sexual.”
Krugman is on to something. But changing attitudes about the seemliness of lavish compensation packages are just one small part of a much bigger picture. In a host of wide‐ranging ways, American cultural values have undergone dramatic shifts since World War II. Of particular relevance to growing income inequality, during the early postwar decades the combination of in‐group solidarity and out‐group hostility was much more pronounced in certain key dimensions. By contrast, contemporary culture is decidedly more individualistic, so that loyalty to other members of the same group and discrimination against outsiders have both weakened.
The more group‐minded mores of the “Treaty of Detroit” era worked in concert with the economic policies of the time to keep competition in check. First, the prevailing racism of the era supported the restrictive immigration policies that excluded foreign‐born workers from competing for jobs in the U.S. labor market. In addition, traditional attitudes about the role of women in society suppressed female labor force participation and kept women from competing in a wide variety of jobs considered to be men’s work. Furthermore, a distinctive social ethos that flourished in the years after World War II fostered a sense of solidarity within business enterprises that probably restrained competition among enterprises for top talent.
Consider, first of all, the transformation in attitudes about race. Open and unapologetic discrimination by whites against other ethnic groups was widespread and socially acceptable in the America of Paul Krugman’s boyhood; it no longer is today. Contrast the oppression of Jim Crow with the affirmative action policies of the past generation; compare the mass internment of Japanese Americans during World War II with the extreme hesitancy to engage in anything that looked like “racial profiling” of Muslims in the wake of 9/11.
How does racial progress fit into the story of income inequality? Not the way we might expect. Of course, the most dramatic manifestation of that progress was the dismantling of institutionalized discrimination against African Americans during the 1960s. More relevant to the rise in income inequality, though, was the fact that more enlightened attitudes about race also encouraged a major reversal in the nation’s immigration policies. The effect of that reversal has been to increase considerably the number of less skilled workers and thereby intensify competition among them for employment.
Under the system that existed between 1924 and 1965, immigration quotas were set for different countries on the basis of the percentage of people with that national origin already living in the country (though immigration from East and South Asia was banned outright until 1952). The explicit purpose of the national‐origin quotas was to freeze the ethnic composition of the United States—that is, to preserve white Protestant supremacy and protect the country from “undesirable” races. “Unquestionably, there are fine human beings in all parts of the world,” said Senator Robert Byrd in defense of the quota system during the debates on the 1965 immigration act, “but people do differ widely in their social habits, their levels of ambition, their mechanical aptitudes, their inherited ability and intelligence, their moral traditions, and their capacity for maintaining stable governments.”
But the times had passed the former Klansman by. With the triumph of the civil rights movement, official discrimination on the basis of national origin was no longer sustainable. Just two months after signing the Voting Rights Act, President Lyndon Johnson signed the Immigration and Nationality Act of 1965, ending the “un‐American” system of national‐origin quotas and its “twin barriers of prejudice and privilege.” Although neither Johnson nor the bill’s backers in Congress realized it at the time, the act would inaugurate a new era of mass immigration: the foreign‐born share of the U.S. population has surged from 5 percent in 1970 to 12 percent as of 2005. That influx has expanded significantly the ranks of low‐skilled workers competing in the American job market.
Changing attitudes about the role of women in society have also served to open up competition in the labor market. Here again, in‐group solidarity (among working males) had expressed itself in a concerted refusal to extend opportunities to (female) outsiders. Just as racism helped to keep foreign‐born workers out of the U.S. labor market, sexism kept women in the kitchen and out of the paid workforce. As of 1950, the labor force participation rate for adult women stood at only 31 percent; by 1970, it had climbed to 42 percent, and as of 2005 it had jumped to 59 percent. Meanwhile, the range of jobs open to women expanded enormously. Prior to the women’s movement of the 1960s and ’70s, working women were largely confined to a “pink collar” ghetto consisting of teaching, nursing, and secretarial and clerical jobs. Elite, high‐paying managerial and professional occupations were almost completely off limits.
Racism and sexism are two ancient and widespread forms of group identity. Another form, more in line with what Paul Krugman has in mind, was a distinctive expression of U.S. economic and social development in the middle decades of the 20th century. Here I am talking about the phenomenon that sociologist David Riesman, in his classic 1950 work The Lonely Crowd, described as the “other‐directed personality.” Journalist William Whyte was referring to much the same thing when he wrote about the prevailing “social ethic” in his 1956 book The Organization Man.
Riesman summed up the situation to which the American character was adapting at mid‐century: “Increasingly, other people are the problem, not the material environment.” For one thing, Americans were emptying out of the countryside and small towns and pouring into cities and suburbia. In 1900, 60 percent of Americans lived in rural areas; by 1960, 70 percent of the population was urban. At the same time, more and more people were working, not with nature on farms or with machinery in factories, but with other people in offices. As of 1900, only 18 percent of the workforce was in white‐collar occupations; by 1960, that figure had climbed to 40 percent. Meanwhile, the descendants of the great wave of immigration from the turn of the century were progressively assimilating into the mainstream of American life, and thus large numbers were joining the white‐collar middle class and leaving big‐city ethnic enclaves for the suburbs. Learning how to get along with other people of all kinds of different backgrounds—in the meeting room, at the water cooler—was indeed a major and novel challenge in the years after World War II.
That challenge was surmounted by the emergence of a new sensibility that defined itself in studied contrast to old‐style “rugged individualism.” The prevailing mores of, in particular, the 1950s put an extraordinary emphasis on fitting into the group and being “well‐adjusted.” When contemporary critics scorned the era for its conformism, they weren’t just talking about the ranch houses and gray flannel suits. “In the Social Ethic I am describing,” wrote Whyte, “man’s obligation is … not so much to the community in a broad sense but to the actual, physical one around him, and the idea that in isolation from it—or active rebellion against it—he might eventually discharge the greater service is little considered.”
A few anecdotes from The Organization Man illustrate Whyte’s point:
“These men do not question the system,” an economics professor says of [college seniors], approvingly. “They want to get in there and lubricate and make them run better. They will be technicians of the society, not innovators.”
One recruiter went through three hundred interviews without one senior’s mentioning salary, and the experience is not unusual. Indeed, sometimes seniors react as if a large income and security were antithetical. As some small companies have found to their amazement, the offer of a sales job netting $15,000 at the end of two years is often turned down in favor of an equivalent one with a large company netting $8,000.
“Any progressive employer,” said one personnel director, “would look askance at the individualist and would be reluctant to instill such thinking in the minds of trainees.”
“We used to look primarily for brilliance,” said one [company] president. “Now that much‐abused word ‘character’ has become very important.… We want a well‐rounded person who can handle well‐rounded people.”
From company to company, trainees express the same impatience. All the great ideas, they explain, have already been discovered and not only in physics and chemistry but in practical fields like engineering. The basic creative work is done, so the man you need—for every kind of job—is a practical, team‐player fellow who will do a good shirtsleeves job. “I would sacrifice brilliance,” one trainee said, “for human understanding every time.”
Times have certainly changed. But although these passages sound jarring to contemporary ears, the sensibility they highlight was doubtless useful in helping people adapt to the new surroundings of an urbanized, highly organized, and culturally pluralistic world. And it seems entirely reasonable to conclude that the prevalence of this social ethic did help to limit competition among business enterprises for top talent. Secure membership in a stable organization was more important relative to maximizing one’s individual position than it is today, and consequently the most talented employees were less vulnerable to the temptation of a better offer elsewhere. Even if they were tempted, a strong sense of organizational loyalty made them more likely to resist and stay put.
The heavy emphasis on group cohesion was further strengthened by the experiences of the Great Depression and World War II. According to social psychologists, our sense of group identity is heightened when membership in that particular group is especially salient—as it is when, say, the group is faced with an external threat. Under those conditions, we feel strong pressures (both external and internal) toward assimilation—that is, reducing the differences among members of the group. By contrast, when a group isn’t faced with some external challenge, the natural tendency is toward differentiation—as everybody seeks some niche in which he has power, influence, and status.
The successive cataclysms of economic collapse and total war engendered a strong sense of shared national identity and resulting group cohesion. We experienced something similar in the wake of the 9/11 terrorist attacks: for weeks and months thereafter the visceral feelings of patriotism and “we’re all in this together” were both powerful and widespread. In the 1930s and ’40s, circumstances conspired to keep national solidarity highly salient, not just for weeks or months, but for 15 years.
Thus, the young office workers and suburbanites of the early postwar years were facing novel social challenges that put a premium on group‐mindedness. And they faced those challenges with outlooks shaped by a historical era uniquely suited to suppressing individualism. It is no wonder that the culture of the 1950s was so strongly marked by an emphasis on fitting in, getting along, and not rocking the boat.
The proponents of nostalgianomics are certainly correct that American political economy has undergone dramatic changes since the 1970s. Let’s start with the economic institutions that Krugman and company focus on: unions, the minimum wage, and income tax rates. Union density, which remained above 30 percent into the 1960s, fell to 12 percent by 2006. In the private sector, only 7 percent of workers belonged to unions by 2006. The federal minimum wage, just before the recent hike, stood at $5.15 in 2006—down nearly 45 percent in real value from its 1968 peak of $9.27 (in 2006 dollars). Annual earnings at the minimum wage, expressed as a fraction of average output per worker, have fallen from the 25–30 percent range during the 1950s and ’60s to under 15 percent in the 2000s. The top income tax rate, which stood at 70 percent as of 1980, was reduced to 50 percent under 1981 legislation, fell all the way to 28 percent under the 1986 Tax Reform Act, rose to 31 percent under a 1990 budget agreement, was increased again to 39.6 percent in 1993, and was reduced to 35 percent in 2001.
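The “nearly 45 percent” figure follows directly from the two dollar amounts cited above. A minimal sketch of the arithmetic, using only the numbers given in the text (variable names are mine):

```python
# Real decline in the federal minimum wage, using the figures cited in the text.
nominal_2006 = 5.15               # federal minimum wage in 2006, before the recent hike
peak_1968_in_2006_dollars = 9.27  # 1968 peak, already expressed in 2006 dollars

# Both values are in the same (2006) dollars, so the real decline is a simple ratio.
real_decline = 1 - nominal_2006 / peak_1968_in_2006_dollars
print(f"Real decline since 1968 peak: {real_decline:.1%}")  # prints "Real decline since 1968 peak: 44.4%"
```

Because the 1968 peak is already stated in 2006 dollars, no separate price‐index adjustment is needed; the 44.4 percent result is what the text rounds to “nearly 45 percent.”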
These changes were part of a much broader shift toward greater reliance on market competition. Price and entry controls in the airline, trucking, and railroad industries were eliminated. Oil and natural gas prices were deregulated. The AT&T monopoly was broken up, and competition in long‐distance telephone services was permitted. Cable and satellite television were allowed to compete with broadcasting. Interest rates were deregulated, limits on branch banking were lifted, and the Glass‐Steagall wall between commercial and investment banking was lowered. Barriers to international trade have continued to fall, and the trade‐weighted average tariff rate now stands at under 2 percent.
Meanwhile, social norms have been transformed by the cultural upheavals of the 1960s and ’70s. The “social ethic” of the “Organization Man” is long gone, as are a whole host of traditional attitudes about race, sex, and much else besides. As a result, Americans today pursue more individualized conceptions of personal integrity and personal fulfillment. They are less committed to group‐oriented norms like racial solidarity, traditional gender roles, or loyal submission to corporate or other bureaucratic hierarchies.
These dramatic changes in political economy and social norms have brought about a broad‐based intensification of competition in American economic life. Of particular relevance to the distribution of income, the changes in question have simultaneously increased competition among less‐skilled workers for employment and increased competition among employers for highly skilled workers. The logical consequence of such developments should be to hold down low‐end wages while boosting the earnings of people at the top. In other words, the logical consequence should be to reinforce and amplify the structural changes in the American economy that have increased income inequality.
That, at any rate, is the logic of the situation. What about the facts? A review of empirical economic research does provide evidence that changes in economic policies and social norms have contributed to the rise in income inequality since the 1970s. To be honest, however, much more research is needed before any confident assessment of the size of the contribution can be made.
A number of economists have identified the declining real value of the minimum wage as a significant factor in the rise of income inequality. In particular, the nominal value of the minimum wage stood unchanged at $3.35 an hour from 1981 to 1990, during which time inflation eroded the purchasing power of the dollar by 30 percent. That real decline coincides closely with a sizeable jump in 90/10 and 50/10 income inequality, and statistical analysis supports a connection as well.
An important caveat is in order, however. The fact is that only a small fraction of the workforce earns the minimum wage. As of 1988, during the midst of this sharp decline, only an estimated 6.5 percent of hourly wage workers—or under 4 percent of all wage and salary workers—were being paid the legal minimum. Moreover, a full 36 percent of minimum wage workers at that time were teenagers. It is difficult to see how a policy that directly affects such a small percentage of adult, full‐time workers could have played more than a modest role in boosting inequality.
Solid evidence also shows a connection between declining private‐sector unions and rising inequality. Although unions do apparently reduce inequality, they do not do so by increasing labor’s overall share of national income. True, collective bargaining has resulted in a wage premium of 15 percent or more for unionized workers. However, above‐market wages in organized sectors tend to result in lower employment, and the resulting diversion of labor to other sectors may depress wages there. In any event, labor’s share of national income has held remarkably steady over the decades regardless of the changing fortunes of the labor movement. According to one estimate, that share actually rose slightly from 72 percent to 73 percent between 1950 and 2007.
So how do unions reduce inequality if not by wresting a greater share of the pie for workers? It turns out that the wage structure in unionized sectors is generally more compressed than in nonunionized sectors. In other words, the pay gap between highly skilled and less skilled, or senior and junior, workers is generally smaller when wages are set by collective bargaining. As a result, high union density results in lower income inequality because more workers’ wages are bunched in a relatively narrow band. Focusing on this dynamic, economist David Card of the University of California at Berkeley estimated that the shrinking percentage of unionized workers accounted for 15–20 percent of the rise in overall male wage inequality between the early 1970s and the early 1990s.
But did policy changes play a role in the fall of union power? Yes, they did; but the theories of nostalgianomics notwithstanding, the relevant changes were not in labor law. The only significant reduction in the Wagner Act’s pro‐union bias occurred with the Taft‐Hartley Act, which, among other things, outlawed “closed shops” (contracts requiring employers to hire only union members) and authorized state “right‐to‐work” laws (which ban contracts requiring employees to join unions). But that piece of legislation was enacted in 1947—three years before the original Treaty of Detroit between General Motors and the United Auto Workers. It would be a stretch to argue that the Golden Age ended before it even began.
Scrounging about for a policy explanation for declining unionization, Levy and Temin point to the failure of a 1978 labor law reform bill to survive a Senate filibuster. They might as well have added the failure in any year to pass legislation requiring all employees to be union members. In any event, maintenance of the policy status quo is not a policy change. Levy and Temin, joined by Krugman, also blame President Reagan’s 1981 decision to fire striking air‐traffic controllers as a signal to employers that the government no longer supported labor unions. It is true that Reagan’s handling of that strike, along with his appointments to the National Labor Relations Board, made the policy environment for unions somewhat less favorable. But the effect of those moves was marginal.
As economist Henry Farber of Princeton and sociologist Bruce Western of Harvard have pointed out, the main cause of declining unionization has been a dramatic difference in employment growth between unionized and nonunionized workplaces. Between 1973 and 1998, employment at unionized firms declined on average by 2.9 percent a year, while jobs at nonunion firms grew at an average rate of 2.8 percent a year. To counteract this differential and hold union density constant would have required torrid rates of organizing new workers. Yet organizing rates have been in long‐term decline since the early 1950s. Only organizing rates at early‐1950s levels would have sufficed to prevent the drop in union density experienced since the early 1970s.
Meanwhile, it is important to understand the reasons for the differences in employment growth that are at the root of unions’ problems. One contributing factor is the structural shift in overall employment away from industries in which unionization was historically most prevalent (in particular, manufacturing industries). However, such structural changes are not the main reason for falling union density. Between 1983 and 2002, union density fell from 16.5 percent to 8.6 percent. Yet according to Trinity University economist Barry Hirsch, even if industrial structure had remained unchanged over this period, union density would still have fallen to 10.2 percent. Thus, some 80 percent of the decline in unionization was due to falling union density within industries.
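The “some 80 percent” figure is the result of a simple shift‐share decomposition: the counterfactual density of 10.2 percent isolates the within‐industry component of the decline. A sketch using Hirsch’s numbers as reported in the text (variable names are mine):

```python
# Shift-share decomposition of the fall in union density, 1983-2002,
# using the figures reported in the text.
density_1983 = 16.5         # actual union density in 1983 (%)
density_2002 = 8.6          # actual union density in 2002 (%)
counterfactual_2002 = 10.2  # 2002 density had the 1983 industry mix been frozen (%)

total_decline = density_1983 - density_2002              # 7.9 percentage points
within_industry = density_1983 - counterfactual_2002     # 6.3 points: density falling within industries
structural_shift = counterfactual_2002 - density_2002    # 1.6 points: employment shifting across industries

share_within = within_industry / total_decline
print(f"Within-industry share of the decline: {share_within:.0%}")  # prints "Within-industry share of the decline: 80%"
```

Holding the industry mix fixed attributes 6.3 of the 7.9 lost percentage points to falling density within industries, which is the roughly 80 percent cited above; the remaining 1.6 points reflect the structural shift away from historically unionized industries.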
The major reason for the fall in unionized employment, according to Hirsch, “is that union strength developed through the 1950s was gradually eroded by increasingly competitive and dynamic markets.” As he elaborates:
To the extent that high union labor compensation is not offset by greater productivity or higher product prices, union gains can be thought of as a “tax” on firm profits. The competitiveness of the product market affects the ability of the unions to acquire gains for their members. When much of an industry is unionized, firms may prosper with higher union costs as long as their competitors face similar costs. When union companies face low‐cost competitors, labor cost increases cannot be passed through to consumers. Factors that increase the competitiveness of product markets—increased international trade, product market deregulation, and the entry of low‐cost competitors—make it more difficult for union companies to prosper.
Accordingly, the decline of private‐sector unionism was indeed abetted by policy changes. The changes in question, however, were not specific shifts in labor policy, but rather the general reduction of trade barriers and elimination of price and entry controls. With the unleashing of competitive forces under the Washington Consensus, unionized firms, saddled with above‐market wages and restrictive work rules, found themselves at a critical disadvantage. They shrank accordingly, and union rolls along with them.

In addition to helping undermine unionization, trade liberalization has also affected inequality directly by reducing relative demand for less‐skilled workers in import‐competing industries. According to a number of studies done in the 1980s and ’90s, however, the magnitude of the effect appears to have been quite modest. For example, in a 1995 paper, Paul Krugman estimated that trade with poorer countries led to about a 3 percent increase in the ratio of skilled to unskilled wages, and thus accounted for roughly 10 percent of the increase in wage inequality since 1980. (Note that the effect here is for all trade, not just increased trade due to lower barriers.) More recently, though, Krugman has argued that the rise of low‐wage China and the increased ability to send offshore specific parts of the production process have magnified the downward pressure of trade on low‐end wages. He concludes, however, that data on trade flows are not sufficiently detailed to permit a reliable quantitative estimate.
The huge wave of immigration over the past generation, the result of a policy change catalyzed by changing social norms, has also exerted mild downward pressure on the wages of native‐born low‐skilled workers. Most estimates show a small effect: at the high end, Harvard economist George Borjas found that immigration between 1980 and 2000 depressed the wages of high‐school dropouts by around 9 percent.
The more dramatic impact of immigration on measured inequality, however, has come from its effect on the composition of the American workforce. Specifically, immigration has substantially increased the number of less‐skilled workers, thereby increasing apparent inequality by depressing average wages at the low end of the income distribution. According to American University and Urban Institute economist Robert Lerman, excluding recent immigrants from the analysis would eliminate roughly 30 percent of the increase in adult male annual earnings inequality between 1979 and 1996.
Interestingly, although the large influx of unskilled immigrants has made American inequality statistics look worse, it has actually reduced inequality for the people involved. After all, immigrants experience large wage gains as a result of relocating to the United States, thereby reducing the wage gap between them and top earners in this country. According to Lerman, if trends in inequality are recalculated to include, at the beginning of the period, recent immigrants and their native‐country wages, a very different picture emerges. Instead of the 90/10 wage ratio increasing by 16.6 percent between 1979 and 1996, it actually fell by 4.7 percent. Thus, the result of immigration has been to reduce human inequality while increasing national inequality.
Changing social norms led, not only to a large infusion of foreign‐born workers into the labor force, but to a large infusion of American women as well. How did the transformation of social attitudes about the role of women in society affect the inequality picture? The massive expansion of opportunities for women resulted in significant gains for gender equality: the female‐male earnings ratio shot up from 0.30 in the mid‐1960s to 0.57 in 2002.
In terms of overall income inequality, though, gains for women have ended up widening rather than narrowing income differences. Since women’s incomes have been rising faster than men’s in recent decades, the shift from one‐ to two‐earner households might have reduced inequality by supplementing lagging male earnings with more rapidly increasing female earnings. But because of the prevalence of “assortative mating”— that is, the tendency of people to choose spouses with similar educational and socioeconomic backgrounds—the rise in dual income couples has actually exacerbated household income inequality. Between 1979 and 1996, the proportion of working‐age men with working wives rose by approximately 25 percent among those in the top fifth of the male earnings distribution, and their wives’ total earnings rose by over 100 percent. According to a 1999 estimate by Gary Burtless of the Brookings Institution, this unanticipated consequence of the women’s movement accounted for about 13 percent of the total rise in income inequality since 1979.
What about rapidly rising incomes at the top of the pay scale? Is there any evidence that changes in policy and social norms have contributed to the rise of “winner‐take‐all” or “superstar” markets?
Economic logic suggests that the general throttling of competitive pressures during the Treaty of Detroit era probably did suppress high‐end earnings by reducing competition among employers for the most talented executives and other key employees. After all, to the extent that trade barriers abroad and price and entry controls at home weakened the connection between firms’ productivity and their bottom line, the importance of attracting and keeping the best personnel was correspondingly diminished. At the same time, extremely high top marginal tax rates discouraged firms from bidding against each other for the most valuable employees, since even sizable raises would produce only paltry increases in actual take‐home pay. Accordingly, it makes sense that policy changes that unleashed competitive forces simultaneously unleashed demand for the most productive employees.
Meanwhile, the “Organization Man” ethos of the early postwar decades likely contributed to the dampening of the market for top talent. If stability and security were valued above short‐term rewards, and if loyalty to one’s employer was widely considered to be an important part of being a good person, then lateral moves from one company to another would have been discouraged—on both the supply and demand sides. As a result, prevailing mores may indeed have helped to keep a lid on top salaries. Furthermore, concerns about the effects on employee morale of lavish pay packages at the top were probably taken more seriously in a more group‐minded age. Consequently, there are good reasons for thinking that the cultural shift since the 1960s to a more individualistic ethos helped to heat up the market for highly skilled workers.
Unfortunately, I am not aware of any studies that have attempted to quantify the effects of greater competition on high‐end incomes. Some data, however, are at least suggestive. Consider, first of all, the effect of free agency on sports salaries. For example, average salaries in Major League Baseball increased 0–2 percent in real terms during 1973–75; in 1976, when free agency was first instituted, salaries jumped 10 percent; in 1977, 38 percent; in 1978, 22 percent. Between 1974 and 1982, salaries as a share of team revenues skyrocketed from 17.6 percent to 41.1 percent. Arguably, the combination of increased product‐market competition, declining top tax rates, and changing social norms has amounted to some kind of analog of free agency for elite managers and professionals—in which case one would expect to see an analogous boost to their earnings.
Along these lines, the recent study by Frydman and Saks of long‐term trends in executive compensation shows something extremely interesting. Their data indicate that the median real compensation of top executives was virtually flat from the end of World War II to the mid‐1970s, notwithstanding the postwar boom and related growth in firm size. From the 1980s forward, however, executive compensation and firm size grew at nearly the same rate.
These findings pose serious problems for those who argue that soaring executive pay reflects (in Frydman and Saks’ words) “managers’ ability to extract rents from the firm.” After all, they point out, pay remained basically flat from the 1950s to the ’70s “even though corporate governance was arguably weaker” during that period. Yet, similar problems confront those who argue now that the big run‐up in pay reflects (again, according to Frydman and Saks) “firms’ competition for scarce managerial talent … leading to higher compensation in larger firms.” If that argument is true, then why didn’t executive compensation rise with firm size in the early postwar decades?
The dramatic change in the trend line seems baffling—until one considers the possibility that changes in economic policies and social norms came together by the late 1970s to inaugurate an era of something like free agency for corporate executives. That explanation remains untested, admittedly, but its fit with the long‐run pay data is striking.
So what can we say about empirical support for the hypothesis that changing policies and norms have contributed to income inequality? Various studies of the effects of specific policies suggest a combination of modest to significant effects. No effort has been made, however, to assess the overall effect of the relevant changes—or the relative significance of that effect compared to the various structural explanations of increased inequality. More work thus remains to be done. For now, though, we can at least say that changes in policies and norms are definitely part of the story of increased income inequality.
As the above review of the historical record and economic literature attests, Paul Krugman and his fellow proponents of nostalgianomics deserve credit for calling attention to the role that changes in economic policies and social norms may have played in the rise of income inequality. They fail, however, to offer a full accounting of the relevant changes; instead, they have cherry‐picked particular policies and norms from the past that allow them to portray the early postwar decades as a model of enlightened social order. And Krugman compounds that failure by offering a completely wrongheaded explanation of how the relevant changes came about.
What did cause the sweeping changes in policy and social norms that made the economy more competitive and the culture more individualistic? According to Krugman, the rise of income inequality is due to the rise to political power of the modern conservative movement. Specifically, conservatives were able to exploit “white backlash” in the wake of the civil rights movement to hijack first the Republican Party and then the country as a whole. Once in power, they duped the public with “weapons of mass distraction” (i.e., social issues and foreign policy) while “cut[ting] taxes on the rich,” “try[ing] to shrink government benefits and undermine the welfare state,” and “empower[ing] businesses to confront and, to a large extent, crush the union movement.”
Alas, Krugman’s account is a crude caricature of historical analysis. To be sure, the rise of the conservative movement has contributed in important ways to the policy and cultural shifts of recent decades. But the real story of those shifts is more complicated, and more interesting, than Krugman lets on. The fact is that influences from across the political spectrum have helped to shape the more competitive, more individualistic, and less equal society we now live in.
Indeed, the relevant changes in social norms were led by movements associated with the political left. The great civil rights campaigns of the 1950s and ’60s exposed the ugliness of traditional racial bigotry and provoked a widespread move toward more enlightened attitudes about race and ethnicity. One result of that racial progress was the elimination of national‐origin quotas in the Immigration and Nationality Act of 1965, a piece of legislation spearheaded by a young Senator Edward Kennedy. Meanwhile, the women’s movement of the 1960s and ’70s mounted a frontal assault on traditional notions about the sexual division of labor. And the counterculture of the 1960s, whose influence spread throughout American society in the “Me Decade” that followed, upended the social ethic of group‐minded solidarity and conformity with a stampede of unbridled individualism and self‐assertion. It seems likely that, with the general relaxation of inhibitions of all kinds, talented and ambitious people felt less constrained about seeking top dollar in the marketplace. In that case, yippies and yuppies were simply two sides of the same coin.
As for changes in economic policies, they did happen and they were dramatic. But contrary to Krugman’s vast‐right‐wing‐conspiracy theory, liberals and Democrats joined with conservatives and Republicans in pushing for those changes. In addition to his role in liberalizing immigration, Edward Kennedy was a leader in pushing through both the Airline Deregulation Act of 1978 and the Motor Carrier Act of 1980 that deregulated the trucking industry—and he was warmly supported in both efforts by left‐wing activist Ralph Nader. President Jimmy Carter signed these two pieces of legislation, as well as the Natural Gas Policy Act of 1978 (which began the elimination of price controls on natural gas) and the Staggers Rail Act of 1980 (which deregulated the railroad industry).
The three most recent rounds of multilateral trade talks were all concluded by Democratic presidents: the Kennedy Round in 1967 by Lyndon Johnson; the Tokyo Round in 1979 by Jimmy Carter; and the Uruguay Round in 1994 by Bill Clinton. And although the slashing of the top income tax rate from 70 percent to 50 percent was one of President Ronald Reagan’s signature accomplishments, the Tax Reform Act of 1986, which pushed the top rate all the way down to 28 percent, was sponsored by two Democrats, Senator Bill Bradley and Rep. Richard Gephardt.
And what about unions? Krugman is right that policy changes contributed to their decline, but his ideological blinkers lead him to identify the wrong ones. The real culprit wasn’t conservative anti‐union bias in labor policy: the effect of Republican administrations on union fortunes has been minimal. What really mattered, instead, was the bipartisan effort to unwind restrictions on domestic and foreign competition in goods and services markets. In the new, more competitive environment, employment losses in unionized sectors made declining union membership all but inevitable.
Krugman’s conspiracy theory may offer an emotionally satisfying tale for ideological opponents of modern conservatism, but as an objective historical account it doesn’t even pass the laugh test. How then should we characterize the dramatic changes in economic policies and social norms over the past generation?
With all the appropriate caveats that should attend any sweeping historical generalization, I submit that these changes represent a broadly successful response to the challenges of social and economic development. To put the matter more plainly, they represent progress.
The move toward a more individualistic, less group‐minded, less tradition‐minded culture is not unique to the United States. As political scientist Ronald Inglehart has documented in dozens of countries around the world, this shift toward what he calls “postmodern” attitudes and values is the predictable cultural response to rising affluence and expanding choices. “In a major part of the world,” Inglehart writes, “the disciplined, self‐denying, and achievement‐oriented norms of industrial society are giving way to an increasingly broad latitude for individual choice of lifestyles and individual self‐expression.”
The increasing focus on individual fulfillment means, inevitably, less deference to tradition and organizations. “A major component of the Postmodern shift,” according to Inglehart, “is a shift away from both religious and bureaucratic authority, bringing declining emphasis on all kinds of authority. For deference to authority has high costs: the individual’s personal goals must be subordinated to those of a broader entity.” Paul Krugman may long for the return of self‐denying corporate executives who declined to seek better opportunities out of organizational loyalty, but they are creatures of a bygone ethos—an ethos that also included uncritical acceptance of racist and sexist traditions and often brutish intolerance of deviations from mainstream lifestyles and sensibilities.
What is the connection between economic growth and cultural individualism? In Inglehart’s view, the key is the freedom to shift one’s attention from physical survival and security to other needs and goals. “This shift in worldview and motivations,” he states, “springs from the fact that there is a fundamental difference between growing up with an awareness that survival is precarious, and growing up with the feeling that one’s survival can be taken for granted.” As Inglehart elaborates:
Individuals under high stress have a need for rigid, predictable rules. They need to be sure what is going to happen because they are in danger—their margin for error is slender and they need maximum predictability. Postmodernists embody the opposite outlook: raised under conditions of relative security, they can tolerate more ambiguity; they are less likely to need the security of absolute rigid rules.
The emergence of a more individualistic ethos thus represents a case of cultural adaptation to new social conditions. The advent of American mass prosperity in the years after World War II was a new social reality that called for the development of a new, corresponding set of social norms. That development occurred, however messily, in the cultural upheavals of the 1960s and ’70s.
Following quickly on their heels were the economic upheavals of the 1970s and ’80s: the oil shocks of 1973 and 1979; the “Great Inflation,” combined (to the consternation of the prevailing Keynesian macroeconomic consensus) with the recession of 1973–75; the abrupt slowdown in productivity growth; and the harsh, disinflationary recession of 1980–82. These crises provided the political impetus for a reorientation of American political economy toward greater reliance on entrepreneurship and market competition—in other words, the shift from the Treaty of Detroit to the Washington Consensus. In particular, much of the sweeping economic deregulation during that period was justified at the time as a means of combating inflation. More generally, just as the Great Depression moved public opinion toward acceptance of greater government involvement in economic affairs, the stagflation of the 1970s created popular support for the idea that government intervention had now gone too far.
As happened in the 1930s, the swing in popular sentiment and political rhetoric was guided by an underlying shift in the intellectual climate—a shift that had been in the offing for many years. In the decades after the institutions of the Treaty of Detroit were hurriedly slapped together, advances in economics knocked major holes in many of the assumptions on which those institutions rested. Consider, for example, the work of three celebrated Nobel Prize–winning economists. Milton Friedman demonstrated that monetary policy failures, not the free market’s supposed tendency toward overproduction, lay at the root of the Great Depression. George Stigler’s studies of economic regulation revealed that “government failures” (in particular, the phenomenon of “regulatory capture”) regularly foiled government attempts to address real or imagined market failures. And F. A. Hayek’s insights illuminated the role of market prices in coordinating the use of knowledge dispersed throughout society—a function that, even with the best of intentions, government officials lack the capacity to carry out.
During their careers, these men (and the University of Chicago, with which all were affiliated) were figures of considerable controversy. But today their achievements are recognized throughout the economics profession. They led economists, and well‐informed opinion generally, to put a healthy respect for the wealth‐creating power of market competition, and a healthy wariness of government efforts to second‐guess markets, beyond serious intellectual dispute. “Milton Friedman … was the devil figure of my youth,” recalled Harvard economist Lawrence Summers, who served as Bill Clinton’s Treasury secretary and is now director of President Obama’s National Economic Council. “Only with time have I come to have large amounts of grudging respect. And with time, increasingly ungrudging respect.”
The change in American political economy, with its greater emphasis on competition and entrepreneurship, thus represented—in broad brush at least, if not in every particular— a distinct improvement in economic policy that reflected an improved understanding of economic affairs. Future Supreme Court Justice Stephen Breyer, then a professor at Harvard Law School (and formerly an aide to Senator Edward Kennedy who worked on airline deregulation), offered this intellectual obituary for the old order back in 1982:
The most persuasive general theory of regulation, popular among economists and political scientists until the late 1950s, held that regulation grew out of a need for a regulatory program to secure the “public interest” and that regulators sought, to the best of their ability, to secure the public interest as defined in their enabling statutes. This view has been discredited by historians and economists, who have argued that many forms of regulation, such as trucking or airline regulation, injure the general public.
To be sure, debate over the proper scope of the regulatory state still rages, especially at present in the wake of a serious financial crisis. Yet even now, there is no serious suggestion that we should return to the days of financial repression with controlled interest rates, branching restrictions, and a separation of commercial and investment banking. Even less conceivable would be a resumption of regulated airline or trucking or railroad rates, or across‐the‐board tariff hikes, or a re‐creation of the AT&T monopoly.
Although the defenders of old‐style economic regulation have all but vanished from the scene, a great deal of acrimonious wrangling over tax rates continues. Even here, however, interest in turning back the clock has its limits. The Bush tax cuts are certainly controversial, but nobody is seriously proposing that top marginal rates should go back to their stratospheric, pre‐Reagan levels of 70 or 90 percent. And that is because the central insight of supply‐side economics— that sufficiently high tax rates can have deleterious effects on incentives to work and invest—is now generally accepted. “Once you’re below the 40 percent range, people aren’t that sensitive,” observed the Harvard economist Lawrence Katz, who served in the Clinton administration. “And once you’re well above 50 percent people are sensitive.” It is telling that Katz’s first point is much more hotly debated than his second.
Let me turn now to an important but vexing question: if the economic institutions of the Treaty of Detroit were really so flawed, how did they produce such great results? And, make no mistake, America’s economic performance during the early postwar decades was truly wonderful. Not only were incomes converging, but they were rising smartly across the board, thanks to sustained, vigorous growth in productivity. Between 1947 and 1973, output per worker in the nonfarm business sector rose at an average annual rate of 2.9 percent—compared to 2.0 percent between 1980 and 2006. Don’t those numbers show that the policies of the Treaty of Detroit were better than those of the Washington Consensus—not only from the standpoint of equity, but from the standpoint of efficiency as well?
This is Paul Krugman’s argument. “The Great Compression, far from destroying American prosperity,” he writes, “seems if anything to have invigorated the economy. If that tale runs counter to what textbook economics says should have happened, well, there’s something wrong with textbook economics.”
Krugman’s analysis here rests on a crude conflation of correlation and causation. It is true that, all things being equal, we should expect better economic policies to generate better economic performance. But in the real world, all things are seldom equal; thus strong performance is not always reliable evidence of good policies.
For example, countries can achieve high growth for a time by unsustainable means—by borrowing heavily or inflating the currency, for example. Furthermore, relatively backward countries, notwithstanding seriously flawed institutions, can often grow faster than more advanced countries. Such “catch‐up growth” is possible because it is easier to adopt technologies or organizational techniques developed elsewhere than to come up with them on your own. Institutions that do not hinder catch‐up growth, or that are even conducive to it, can nonetheless become problematic as a country approaches the technological frontier. Meanwhile, for countries at that frontier, the specific challenges of economic growth change over time as a country moves through different stages of economic development. Institutions that may serve well enough at one stage can lead to difficulties later on. What works in the early stages of industrialization, for example, may work much less well in a postindustrial “knowledge economy.”
Untangling exactly why a country grew at a given rate during a given period of time is thus devilishly difficult. Not surprisingly, there is no generally accepted explanation for why the U.S. economy did so well in the early postwar decades. But several factors were especially conducive to strong performance at that time. There was a pent‐up demand for goods and services after the privations of the Great Depression and the mobilization of World War II. There was also a pent‐up supply of new products that couldn’t be brought to market during the depression and war years. That pent‐up supply was augmented by technological and organizational breakthroughs accelerated by the imperatives of total war. Big advances in transportation, communications, and air conditioning stimulated catch‐up growth in the underdeveloped South and underpopulated West. And rapid upgrades in human capital (first explosive growth in high school graduates, then explosive growth in college graduates) doubtless helped to spur productivity gains.
We will probably never have a fully satisfactory account of why the postwar decades were such banner years for the American economy. What we do have is very strong economic evidence that the elimination of barriers to competition beginning in the 1970s has, on the whole, been good for productivity and growth. Analysis of specific deregulation initiatives shows sizable welfare gains for consumers and productivity gains for producers; a voluminous literature documents the general connection between trade liberalization and faster economic growth as well as static efficiency gains; and a large body of research shows that financial liberalization promotes overall economic growth.
Let me focus here on one area where some economists have argued that restricting competition can actually be good for productivity. The area is labor markets, and the restrictions take the form of unionization and collective bargaining. Richard Freeman of Harvard, among others, has claimed that the collective voice provided by unions, along with their bargaining power, can create conditions that allow workers to do their jobs more effectively. Some research findings do support a positive connection between unions and productivity, but there are many contrary findings as well. Overall, as Barry Hirsch has concluded, “my assessment of existing evidence is that the average union effect is very close to zero, and as likely to be somewhat negative as somewhat positive.”
What is beyond serious debate is that unions in the United States reduce firms’ profitability. The hit to firms’ bottom lines doesn’t just come at the expense of monopoly profits in concentrated industries; in addition, it reflects a union “tax” on returns from long‐term investments. As a result, and as the evidence clearly shows, the effect of unions is to reduce firms’ research and development, other investment, and employment growth.
All of which sets up a nasty dilemma. If organized firms dominate a domestic industry and are relatively immune from foreign competition, then unionization imposes a significant drag on long‐term economic dynamism. If, on the other hand, organized firms face real competition from nonunionized firms at home and from foreign firms, union density will suffer an inevitable and ongoing decline over time. The story of the rise and fall of private‐sector unionism in this country is the story of how this dilemma was ultimately resolved: in favor of the country’s overall economic health, and against heavy unionization of industry. Perhaps, in the great wave of union organizing during the 1930s and ’40s, firms’ acquiescence in unionization was the profit‐ and productivity‐maximizing move, at least in the short term. In the face of a highly aggressive labor movement stirred to militancy by the Great Depression, buying labor peace with above‐market wages and restrictive work rules might well have been cheaper than protracted strikes and ongoing workplace acrimony. Over time, though, the burdens of the union “tax” led unionized firms to open plants in parts of the country less hospitable to union organizing—and led to the emergence of new firms in those parts of the country. Concurrently, the gradual rise of competition from Europe, Japan, and later less developed countries in the decades since World War II—a phenomenon abetted by the ongoing fall of U.S. trade barriers—proved devastating for less competitive unionized firms. Private‐sector unionism has thus been a victim of economic progress.
Paul Krugman is a brilliant economist who has made important contributions to his field. He has won, deservedly, the highest awards his profession can bestow. Yet in interpreting America’s economic and social history since World War II, he has allowed his ideological commitments to cloud his judgment. In lieu of objective analysis, he offers rosy‐tinged nostalgia.
Krugman looks back to the America of his boyhood and sees a society that combined energetic, activist government management of economic affairs with vigorous growth and converging incomes. Entranced by that vista, he imagines a Golden Age—not just of economic performance, but of economic policies and social norms as well. “During the thirties and forties,” he writes, “liberals managed to achieve a remarkable reduction in income inequality, with almost entirely positive effects on the economy as a whole. The men and women behind that achievement offer today’s liberals an object lesson in the difference leadership can make.”
The actual historical record is not nearly so neat and tidy. It is true that the quarter‐century or so after World War II was a period of glittering economic growth and widely shared prosperity. It is also true that the distinctive political economy of that period contributed at least to some extent to the concurrent narrowing of the income distribution—and that this political economy was reinforced by a set of social norms now out of fashion.
But when that distinctive political economy is reviewed comprehensively and dispassionately, the conclusion that its effects were “almost entirely positive” becomes impossible to sustain. On the contrary, what made the Treaty of Detroit distinctive was its pervasive restrictions on competition in product, capital, and labor markets. Consider high trade barriers, farm price supports and production limits, price and entry regulations in transportation and energy, interest rate caps and branching restrictions, national‐origin immigration quotas—does Paul Krugman really wish to defend such things? Yet all of those policies—along with the labor laws, high minimum wage, and progressive tax rates that Krugman does celebrate—were constituent elements of the postwar economic order. All shared the same bias in favor of producer welfare at the expense of consumer welfare. And it was that pro‐producer, anti‐competition bias that served to promote income convergence: first, by limiting competition among less‐skilled workers; and second, by limiting competition for the highest‐skilled workers.
Meanwhile, the social norms that reinforced the Treaty of Detroit were hardly ones that a 21st‐century progressive would be expected to endorse. The social ethic that put downward pressure on the paychecks of corporate executives also promoted conformism and groupthink. And it went hand in hand with the traditional racism and sexism that, in their own ways, supported the Great Compression. The social ethic fostered in‐group solidarity, which Krugman lauds in the corporate context. But that context was by no means the only one in which in‐group solidarity among white males used to prevail. Then came the convulsions of the 1960s, which ushered in a new, more individualistic ethos—with dramatically different attitudes regarding race, sex, sexual orientation, the permissible scope of cultural expression, the role of religion in public life, the nature of American national identity, respect for authority generally … and, let’s not forget, loyalty to one’s boss and the seemliness of fat paychecks.
Here, stripped of nostalgia, is the actual story of the connection between income trends and changes in political economy and social norms. Economic policies were altered to give wider scope to entrepreneurship and competition, in accordance with advances in economics. Social norms shifted away from an emphasis on loyalty to the group and toward a greater emphasis on personal fulfillment (including through elective membership in groups of our own choosing). The upshot of these changes was to add to the increase in income inequality over the past generation through the following channels:
• Increasing the supply of less‐skilled workers (through increased immigration)
• Decreasing the relative demand for less‐skilled workers (through increased imports of labor‐intensive products)
• Decreasing the ability of less‐skilled workers to command above‐market wages through minimum wage laws
• Decreasing wage compression through collective bargaining
• Increasing competition among employers for the most‐skilled workers (because of a stronger connection between firm productivity and firm performance, and reduced barriers to moving from firm to firm)
• Amplifying underlying income trends through assortative mating and the growing prevalence of two‐earner households
There is no morality tale here. Economic institutions improved, and social norms improved, but as a result incomes diverged.
The rise in income inequality does raise issues of legitimate public concern. And reasonable people disagree hotly about what can and ought to be done to ensure that America’s prosperity is more widely shared. But the caricature of postwar history put forward by Krugman and other purveyors of nostalgianomics won’t lead us anywhere. Reactionary fantasies about the good old days never do.
This essay was originally published by Cato in February 2009
References that appear in the original have been removed