

For our purposes, the “fabulous Nineties” can be bracketed by two major political events. The fall of the Berlin Wall on November 9, 1989, marked the end of the cold war and the beginning of a period of geopolitical optimism. The Soviet empire disintegrated, the great powers unified Germany, and Europe began its movement to a common currency. The end of the period came on September 11, 2001, when the age of terrorism began, and the security anxieties of the cold war were replaced by altogether different ones.

Economic history seldom has the dramatic discontinuities of political events, and changes in economic fortunes often have complex roots. Good economic history requires not only a sound understanding of how the economy functions but also attention to institutional details, such as how monetary and fiscal policy decisions are made, as well as to the economic impact of events such as the attacks of September 11, 2001, or the war in Iraq and its aftermath.

When we interpret the Nineties, it is natural for us to rely on histories written by economists who witnessed the events firsthand. Two books that will serve as excellent sources for this period are The Fabulous Decade, a careful analysis of national economic policy first published in 2001 by Alan Blinder and Janet Yellen, and The Roaring Nineties, a comprehensive review of the last decade by Joseph Stiglitz.1

All three authors are distinguished academic economists who worked in different government agencies during the 1990s. Alan Blinder served in President Clinton’s Council of Economic Advisers (CEA) in 1993 and 1994; he then became a governor and vice-chairman of the Federal Reserve from 1994 until 1996, when he returned to Princeton University. Janet Yellen was a governor of the Federal Reserve from 1994 until 1997, and she then served as chair of the CEA from 1997 to 1999; she is now at the University of California at Berkeley.

The book by Blinder and Yellen has a relatively narrow focus. It seeks to explain why the 1990s were so successful from a macroeconomic point of view—one that takes into account the behavior of the major economic aggregates such as output, unemployment, and productivity. Their analysis proceeds by considering economic statistics for the period; by using two different econometric models to assess the effects of economic shocks during the 1990s on both consumption and investment; and by drawing on detailed transcripts of Federal Reserve meetings.

Most economists at the CEA and the Federal Reserve are highly qualified, but Joseph Stiglitz is unique in being the only Nobel Prize winner who was also the chief economist to the President of the United States, whom he served from 1995 to 1997. Stiglitz then served as chief economist of the World Bank between 1997 and 2000. His academic writings have emphasized how “asymmetric information”—situations where one side of a transaction has better information than the other—distorts market signals and produces market failures.

Stiglitz’s book is part economic analysis, part memoir, and part social commentary. Whereas Blinder and Yellen celebrate the economic successes of the 1990s, Stiglitz is much more ambivalent about the accomplishments of the period. He has more complaints about than compliments for the economic policies of the 1990s and holds that the Clinton administration failed to live by its principles:

Of all the mistakes we made in the Roaring Nineties, the worst were caused by a lack of standing by our principles and lack of vision…. Why did we fail to follow through on our principles?… We were… I think, in part a victim of our own seeming success. At the beginning of the administration, the bold, broad-gauged agenda to address America’s problems was put aside in favor of a single-minded focus on deficit reduction.

The Clinton administration’s concentration on reducing deficits was part of what Stiglitz feels was the misguided “ascendancy of finance.” In his view, the Clinton officials were excessively concerned about the performance of the stock and bond markets because investment banker Robert Rubin headed the Treasury Department. Stiglitz is not the only person to have made that observation. James Carville, President Clinton’s first campaign manager, quipped a few years ago, “I used to think if there was reincarnation, I wanted to come back as the President or the Pope or a .400 baseball hitter. But now I want to come back as the bond market. You can intimidate everyone.”

In fact, the scandals of finance are a good example of the asymmetric-information syndrome whose study Stiglitz pioneered. This syndrome was pandemic in the late 1990s when analysts like Salomon Smith Barney’s Jack Grubman touted stocks that were hardly viable while, behind the scenes, his firm raked in billions in underwriting fees. Other examples were corporate managers of Enron, WorldCom, and other companies; they knew much of the truth about their companies’ finances, while the public and even sophisticated analysts did not. This asymmetric information allowed a handful of insiders to enrich themselves while defrauding shareholders. According to a study by the Financial Times, corporate insiders from the top twenty-five bankrupt companies took $3.3 billion in stock sales, bonuses, and other compensation even as their firms were spiraling into insolvency.2


Indeed, the ability of corporate managers to hide financial information about their companies from those who own the companies—the shareholders—is a truly bizarre situation for which there is little justification. Corporations keep multiple sets of accounts—one set for published financial accounts, a second and unpublished set of books for tax purposes, and in many cases a third set of books for the managers themselves. In essence, corporations get to choose the yardstick by which they measure the profits that they publish. Under current law they have no obligation to reveal tax returns, and stockholders have no right to obtain them. One important reform that would illuminate the true state of corporate finances would be to require corporations to publish their tax returns. This would allow investors to assess profits by a standard yardstick.

The reader should be warned that, as Stiglitz states, his “is not a book of investigative reporting.” In his passion for revisiting the history of the Nineties, Stiglitz occasionally stretches the interpretation of events, and his arguments are sometimes overstated or inaccurate. One example is a tendency to exaggerate the extent to which Keynesian theories are accepted doctrine.3 Some schools of economics hold that Keynesian views—for example, on the role of fiscal and monetary policies in affecting real output and unemployment—are dead wrong, or even dead. Skeptics of Keynesian views include those who lean toward the “real business cycle” approach, which holds that technological shocks rather than shocks to spending produce business cycles. While I would argue that these critics of modern Keynesian economics have an unconvincing case, it is a major mistake to assume that they do not exist.


The economic turning points of the Nineties are less dramatic than the political events—the fall of the Berlin Wall and September 11—that bracketed the decade. For simplicity, I will use the Clinton years between 1993 and 2000 as the primary period of analysis.

Table 1 shows, in the top panel, the four major indicators of the business cycle. These factors show the short-run movements of the economy and indicate the extent to which the economy is living up to its economic potential and making use of its labor and capital resources. The bottom panel shows the major “structural indicators”—the underlying forces that determine the long-run strength and growth prospects of the economy, as well as its fiscal health and its ability to improve the living standards of the population over the longer term.

One of the major contributors to prosperity in the 1990s was structural—the peace dividend made possible by the end of the cold war. Defense spending as a share of gross domestic product (GDP) declined by 2.6 percentage points, which is equivalent to $280 billion at today’s income level. This bonanza allowed private consumption to grow rapidly and helped to balance the nation’s books.

During the supply-side years under the Reagan administration, there were persistent federal budget deficits, averaging 3.5 percent of GDP. The first major steps toward reducing the deficit were taken by the first Bush administration in 1990 in the form of a small tax increase, and by congressional legislation that limited spending increases and tax cuts. But the Clinton administration pushed through the most visible fiscal correction in 1993, with a major and narrowly won program of tax cuts and spending reductions. The turnaround in the budget was surprising and dramatic. From a peak deficit of 5 percent of GDP in late 1992, the budget actually went into a surplus of 2 percent of GDP in 2000.

A third structural factor, and the most important for long-run economic growth and for the increase in wages and living standards, concerns growth in productivity—or output per hour worked. While the supply-side years are often touted as the “unshackling of American capitalism,” the 1980s in fact had a miserable productivity growth of only 1.8 percent per year. This rate rose sharply in the “fabulous” Nineties, and, much to the surprise of many, has continued to increase during the last three years.

While these structural indicators form the backdrop of long-run economic performance, the business cycle makes the headlines and drives election results. Table 1 shows four major factors that are central to the business cycle: the inflation rate, the rate of growth of real GDP, the unemployment rate, and the rate of growth of employment. The striking feature of the Nineties was that each of the major cyclical indicators improved, often sharply, as compared to the supply-side period. For example, real GDP growth rose from 2.8 percent per year to 3.7 percent per year, and employment growth rose from 1.6 percent per year to 2.4 percent per year.


These changes in percentage points may seem trivially small. In reality, because of their compounding effects, small differences in the rate of growth make big differences in the actual levels of income and output. The seemingly small improvement during the 1990s in the annual growth rate of employment, from 1.6 to 2.4 percent, compounds into roughly eight million additional jobs over the period. Similarly, the higher growth of real output during the 1990s implies that total output at the end of the period was about $700 billion per year higher, which amounts to about $7,000 per American household per year.
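The compounding arithmetic behind the jobs figure can be checked in a few lines. This is only a rough sketch: the 120-million starting employment base and the eight-year horizon are illustrative assumptions, not figures from the text.

```python
# Illustrative check of how a small difference in annual growth rates
# compounds into a large difference in levels over time.
base_employment = 120e6   # workers at the start of the period (assumed)
years = 8                 # roughly the 1993-2000 window

slow = base_employment * (1 + 0.016) ** years   # 1.6 percent per year
fast = base_employment * (1 + 0.024) ** years   # 2.4 percent per year

extra_jobs = fast - slow   # a difference on the order of eight million jobs
print(f"Extra jobs from the higher growth rate: {extra_jobs / 1e6:.1f} million")
```

Under these assumptions the gap comes out in the neighborhood of the eight million jobs cited above; a different starting base shifts the answer proportionally.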

The sources of the economic boom of the 1990s are the major topic of the study by Blinder and Yellen. Why, they ask, did the recovery last so long? Why did unemployment decline so much? What kept inflation in check?

The major difficulty in answering such questions lies in the economy’s complex and evolving structure. How do the short-term interest rates set by the Federal Reserve affect other interest rates? How do interest rates affect asset prices, such as stock prices, housing prices, and the foreign exchange rate of the dollar? What are the effects of changes in interest rates, exchange rates, and wealth on the spending of consumers, on business investment, and on trade with other countries?

To help policymakers sort through the different ways that policies or external events affect the economy, economists rely upon econometric models. These are sets of equations, based on historical data, that are capable not only of simulating actual economic history but also of projecting “counterfactual” histories, or what would have happened if policies or events had been different.

Blinder and Yellen use two large econometric models to investigate the events of the 1990s. Their basic finding is that a series of unexpected and favorable developments was responsible for the extraordinary performance. Among the favorable developments were declining health care costs, a fall in oil prices, a rise in the exchange rate of the dollar, and changes in the way the consumer price index (CPI) was measured. The most important single development was the upturn in productivity growth. This directly lowered production costs and inflation, led to higher growth in real (or inflation-adjusted) wages, and reduced the demand of workers for higher money wages.

Taken together, these factors allowed the Federal Reserve to keep interest rates low so that consumption and investment could continue to expand through 2000. According to the two econometric models used by Blinder and Yellen, these five factors reduced inflation by between 2 and 5 percentage points between 1995 and 1999. In addition, by 1999, the unemployment rate was 1½ percentage points lower than it would have been without the favorable shocks.

A central question for the future raised by Blinder and Yellen is whether the economic record of the late 1990s should be viewed as a structural shift in the economy or as a series of favorable but unique events. Some enthusiasts of the high-technology economy of the late 1990s argued that the old tradeoffs (such as the tendency of low unemployment to cause rising inflation) no longer applied in an interconnected global economy. It is worth quoting Blinder and Yellen’s conclusion on this point:

The extraordinary combination of low inflation and low unemployment that we have enjoyed in recent years should be mostly transitory. As workers come to realize that productivity is rising faster, they will demand more generous real wage increases. As firms begin to grant these wage increases, their costs will rise. On this view, the short-run Phillips curve tradeoff [between inflation and unemployment] should return to normal as perceptions catch up to reality. In theory, and in both of the simulation models, the “bliss” is only temporary.4

In other words, because the factors that kept inflation and unemployment low in the 1990s (health care costs, oil prices, and the value of the dollar) are unlikely to persist, the 1990s combination of low unemployment and low inflation cannot be sustained in the years to come.

The history recounted by Blinder and Yellen leaves fiscal policy—the levels of government spending and taxes—largely out of the picture. Surely, you might think, the fiscal policies of the 1990s, which emphasized deficit reduction, should have had some effect on the overall economy. Paradoxically, mainstream economics holds that the austerity regime of the Clinton years should have produced a contracting, not an expanding, economy. The answer to this puzzle is that the fiscal contraction was offset by monetary expansion: the Federal Reserve kept interest rates at historically low levels through the 1990s. Low interest rates encouraged business investment and home-building, so that between 1992 and 2000 investment rose by 4 percent of GDP. The fiscal tightening reduced government purchases by over 2 percent of GDP over the same period. This combination, long advocated by pro-growth economists, is known as a change in the “fiscal-monetary mix.” Here is one of the major lessons of the 1990s: well-designed monetary and fiscal policies can increase investment and long-run economic growth without increasing unemployment or inflation.


Most people associate the Nineties with the breathtaking rise of the stock market. The Dow Jones Industrial Average rose from 2,588 in January 1991 to a monthly peak of 11,302 in January 2000. The NASDAQ index, which includes many high-technology companies, rose from 414 in January 1991 to a peak of 5,250 in March 2000, for an average increase of 32 percent per year. The increase in the major US stock-price indexes in the 1990s was larger than during any ten-year period in the historical record.
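The 32 percent annualized figure for the NASDAQ can be recovered from the two index levels quoted above. A quick sketch; treating the January 1991 to March 2000 span as nine years and two months is an approximation.

```python
# Recover the compound annual growth rate (CAGR) of the NASDAQ
# from its endpoint levels, as quoted in the text.
start_level = 414      # NASDAQ, January 1991
peak_level = 5250      # NASDAQ peak, March 2000
years = 9 + 2 / 12     # roughly nine years and two months (approximate)

cagr = (peak_level / start_level) ** (1 / years) - 1
print(f"Annualized growth: {cagr:.1%}")   # close to the 32 percent cited
```

The same formula applied to the Dow (2,588 to 11,302 over nine years) gives a much lower, though still remarkable, annual rate.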

Since early 2000, stock prices have fallen sharply from their peaks, although there has been some recovery over the last six months. The decline in the broad-based Standard and Poor’s 500 index over the last three years has been the largest since World War II. The total loss in market value is around $6 trillion since early 2000.

One of the major questions surrounding the 1990s is why the stock market rose so high and then crashed. The prices of many of the so-called “new economy” stocks, which were driving the financial bubble, had little or no correlation to actual earnings.5 For example, shares of, the on-line bookstore, experienced exponential growth years before the company had produced a cent of profit.

Economists tend to divide into two camps to explain this phenomenon. The first group, espousing what is known as the efficient-market hypothesis, holds that market prices reflect the rational expectations people have about future earnings, dividends, and interest rates. This group holds that changes in economic fundamentals in the late 1990s were so dramatic that it was reasonable to believe that earnings and profit growth would justify a continued rapid growth in stock prices. The high point of this approach was a 1999 book by James K. Glassman and Kevin A. Hassett, called Dow 36,000, which forecast a further tripling of stock prices:

The stock market is a money machine: Put dollars in at one end, get those dollars back and more at the other end…. The Dow should rise to 36,000 immediately, but to be realistic, we believe the rise will take some time, perhaps three to five years [i.e., 2002 to 2004].6

One critic remarked that there was nothing wrong with the Glassman-Hassett argument that couldn’t be salvaged by dividing all the predicted numbers by ten. (In fact, division by four would reflect the reality as of late 2003.)

The sharp decline in stock prices during the last three years is seen as a vindication of the second explanation for the 1990s, which is sometimes called behavioral finance. This approach broadens the analysis of stock prices to take account of the psychology and sociology of investor behavior. One of the pioneers in this field has been the Yale economist Robert Shiller. His popular book Irrational Exuberance attracted headlines in early 2000 with a bold and prescient prediction:

Taken as a whole, [current studies] suggest that the present stock market displays the classic features of a speculative bubble: a situation in which temporarily high prices are sustained largely by investors’ enthusiasm rather than by consistent estimation of real value. Under these conditions,… the outlook for the stock market into the next ten or twenty years is likely to be rather poor—and perhaps even dangerous.7

One source of investor enthusiasm in the 1990s was undoubtedly the “new economy,” consisting of computer, software, telecommunications, and Internet firms, which rose from nowhere to capture almost one third of stock market values by early 2000. Many billions of dollars of market value were based on little more than a few college-dropout computer programmers, a bright-sounding business plan, and “.com” tacked onto the company name. The rise and fall of many such companies is well chronicled in Dot.con, John Cassidy’s wryly titled book, republished in the past year.8 A staff writer for The New Yorker, Cassidy captures the enthusiasms of the fabulous 1990s. His view of this period, like Shiller’s, is that it was a classic financial bubble.

Stiglitz devotes a substantial part of his book to the financial markets of the 1990s, with chapters on the stock market bubble, accounting frauds, Enron, and the banking sector. His explanation, however, puts less emphasis on investor exuberance than on the deregulation of many parts of the economy, including telecommunications, utilities, and banking, that encouraged dysfunctional finance:

In the past three decades, the world has seen close to a hundred crises and many of them were brought on by some form of too-rapid deregulation. Though the economic downturn in 2001 is only a milder form of these more virulent diseases, there is no question that major parts of the downturn resulted from the deregulation of the Nineties.

Stiglitz points to several examples of deregulation, or poorly designed regulations, that first inflated the bubble of the 1990s and eventually produced its collapse. One important example was the 1999 repeal of the Depression-era Glass-Steagall Act, which had prohibited commercial banks from engaging in the investment-banking business. When they were allowed to do so, according to Stiglitz, the result was poor analysis that misled investors. A second example came when, under political pressures, government regulators backed down from a regulation that would have required more accurate accounting for stock options.9 Stiglitz also attacks the Clinton administration for acquiescing to a cut in the capital gains tax in 1997. Whatever the merits of these policies, it is implausible that they were major contributors to the bubble of the 1990s. The financial mania may have been encouraged by poor regulation and lax corporate governance, but the impacts of accounting and tax changes were probably responsible for no more than a small part of it.

One important puzzle about this period is how the new economy of improved technology and computerization could have played such a central part in the rebound of the productivity growth of the 1990s while many new economy companies—selling toys, groceries, greeting cards, cartoons, legal advice, and auctioning airline and hotel reservations—proved to be such catastrophically poor investments. The performance of the new economy is best seen in the companies concerned with information technology and industrial machinery (which include computers, semiconductors, and related equipment). Here the growth in output per hour worked averaged 15 percent per year between 1995 and 2001. This rapid growth is responsible for at least half of the productivity upsurge in the last decade.

In view of the enormous productivity growth in these sectors, does it not seem plausible that those who invent these new products and services or put them on the market should be able to capture some of the profits from them? If GDP is $1 trillion larger in 2003 because of the new economy (which is a pretty good guess), shouldn’t some of the higher income go to entrepreneurs in that sector? One might think so, but it did not happen. The underlying mistake here is what I call “the alchemist fallacy.” I don’t mean the obvious fallacy that a miraculous process could transmute base metals into gold. Rather, the alchemist fallacy was to think that once a process for producing gold from lead was discovered, gold would retain its scarcity, and the discoverers would be rich beyond belief.

During the 1990s, people thought modern-day alchemy had been discovered in the transformation of electrons, optical fibers, and secret programming codes into market value. But innovation does not automatically lead to profits. Economic history teaches us that when a wondrous new product is invented, its price generally falls as other firms enter and imitate the product, rapidly eating away at the profits. In fact, with a few exceptions, companies producing innovations on average earn no more than a normal return on their investments. Virtually anyone could, and apparently did, set up a dot-com making use of the Internet, and the market was flooded with electronic greeting cards, on-line grocery shopping, free Internet service, and similar companies that were losing money in order to make sales. After the gold rush ended in 2000, according to recent data, new economy firms appear to have earned lower profits than those in the rest of the economy.


The economic end of the Nineties did not arrive with the shock of the September 11 terrorist attacks. While it is too early to write the history of this period, some initial observations are evident. Four factors gradually converged in 2000 and 2001 to bring the expansion to an end. One traditional factor was the tightening of interest rates by the Federal Reserve starting in early 1999. The Federal Reserve views its role as a stern parent who “takes away the punch bowl just when the party gets really good.” There was little doubt that the party was going full-tilt by 1999. The Federal Reserve then raised short-term interest rates by almost two percentage points between early 1999 and late 2000. A second factor was the bursting of the stock market bubble, which retarded business investment and personal consumption and thereby slowed overall spending.

A third factor, which turned out to be smaller than many contemporaneous observers feared, was the terrorist attacks of late 2001; these appear to have had only a transient effect on the business cycle, primarily diverting spending from travel to other sectors. A final factor, beginning in mid-2002 and continuing through the spring of 2003, was concern over the war in Iraq, which raised oil prices and slowed both investment and consumer spending.

The poor economic performance of the new millennium is shown in the last column of Table 1. Real GDP growth since 2000 has been 1½ percentage points per year below that of the fabulous Nineties. The slow output growth and rapid productivity growth have been responsible for the declining employment over the last three years.

Stiglitz argues that these recent developments can be linked to decisions made in the 1990s. Referring to the Clinton administration, he writes, “We had also planted some of the seeds of destruction that would underlay the recession that arrived in March 2001.” The self-flagellation is poorly aimed. The growth of the 1990s was spectacular. It could not have been sustained much longer—there was little further room for the economy to grow. Stiglitz’s account of how policy failures caused recession is hampered by his lack of reference to the quantitative models of the kind that Blinder and Yellen have provided. In the end, the Blinder-Yellen view—that the string of good luck basically ran out in 2001—is a more persuasive account of recent history.

In his last chapter, Stiglitz is highly critical of the stewardship of the current administration. His criticism here is if anything too gentle. The budgetary surpluses that took a decade to achieve were undermined virtually overnight. The Bush administration pushed through large tax cuts, greatly increased military expenditures, embarked on its open-ended commitment in Iraq, and most recently persuaded Congress to pass an expensive deficit-financed expansion of Medicare. Since 2000, the federal budget has moved toward the red by more than $700 billion. Next year’s deficit is projected to be more than half a trillion dollars. Except during the two world wars, the fiscal reversal amounts to the largest four-year deterioration in the federal budget in American history.

The budgetary backsliding is in fact a symptom of a profound policy shift toward selfish and short-sighted policies. Since the present administration took office, the foreign indebtedness of the United States has increased by $1.4 trillion. The Bush administration has systematically undermined virtually every international institution in sight—from the International Criminal Court and the Kyoto Protocol to the World Trade Organization and the Anti-Ballistic Missile Treaty. Even as defense spending grows, the overall commitments of the US military appear unsustainable, according to a recent analysis of the Congressional Budget Office, which stated, “CBO’s analysis indicates that the active Army would be unable to sustain an occupation force of the present size beyond about March 2004….”10 Perhaps most distressing is the precipitous drop in America’s regard in world opinion—in virtually every country around the world, ally or neutral or antagonist.11 In all these matters, decades of cooperative policies have been reversed in two years.

The fabulous Nineties—with soaring stock market, falling unemployment, declining defense spending, budget surpluses, and bubbly optimism—are history. Moreover, as long as market participants remember how they lost $6 trillion on absurd and wildly overvalued speculations, a similar exuberance is unlikely to recur in the near future. More likely is an economy in which large federal budget deficits lead to cuts in existing civilian programs and doom critical priorities such as comprehensive health care. The long-term management of our economy has fallen prey to the short-term maximization of votes in which the planning cycle of the US administration extends no further than November 2004. For all these consequences, surely, the faults lie in misguided policies of the Bush administration and not in the stars.

This Issue

January 15, 2004