
The Health Care Crisis and What to Do About It

The good news is that we know more about the economics of health care than we did when Clinton tried and failed to remake the system. There’s now a large body of evidence on what works and what doesn’t work in health care, and it’s not hard to see how to make dramatic improvements in US practice. As we’ll see, the evidence clearly shows that the key problem with the US health care system is its fragmentation. A history of failed attempts to introduce universal health insurance has left us with a system in which the government pays directly or indirectly for more than half of the nation’s health care, but the actual delivery both of insurance and of care is undertaken by a crazy quilt of private insurers, for-profit hospitals, and other players who add cost without adding value. A Canadian-style single-payer system, in which the government directly provides insurance, would almost surely be both cheaper and more effective than what we now have. And we could do even better if we learned from “integrated” systems, like the Veterans Administration, that directly provide some health care as well as medical insurance.

The bad news is that Washington currently seems incapable of accepting what the evidence on health care says. In particular, the Bush administration is under the influence of both industry lobbyists, especially those representing the drug companies, and a free-market ideology that is wholly inappropriate to health care issues. As a result, it seems determined to pursue policies that will increase the fragmentation of our system and swell the ranks of the uninsured.

Before we talk about reform, however, let’s talk about the current state of the US health care system. Let us begin by asking a seemingly naive question: What’s wrong with spending ever more on health care?

1.

Is health care spending a problem?

In 1960 the United States spent only 5.2 percent of GDP on health care. By 2004 that number had risen to 16 percent. At this point America spends more on health care than it does on food. But what’s wrong with that?

The starting point for any discussion of rising health care costs has to be the realization that these rising costs are, in an important sense, a sign of progress. Here’s how the Congressional Budget Office puts it, in the latest edition of its annual publication The Long-Term Budget Outlook:

Growth in health care spending has outstripped economic growth regardless of the source of its funding…. The major factor associated with that growth has been the development and increasing use of new medical technology…. In the health care field, unlike in many sectors of the economy, technological advances have generally raised costs rather than lowered them.

Notice the three points in that quote. First, health care spending is rising rapidly “regardless of the source of its funding.” Translation: although much health care is paid for by the government, this isn’t a simple case of runaway government spending, because private spending is rising at a comparably fast clip. “Comparing common benefits,” says the Kaiser Family Foundation,

changes in Medicare spending in the last three decades have largely tracked the growth rate in private health insurance premiums. Typically, Medicare increases have been lower than those of private health insurance.

Second, “new medical technology” is the major factor in rising spending: we spend more on medicine because there’s more that medicine can do. Third, in medical care, “technological advances have generally raised costs rather than lowered them”: although new technology surely produces cost savings in medicine, as elsewhere, the additional spending that takes place as a result of the expansion of medical possibilities outweighs those savings.

So far, this sounds like a happy story. We’ve found new ways to help people, and are spending more to take advantage of the opportunity. Why not view rising medical spending, like rising spending on, say, home entertainment systems, simply as a rational response to expanded choice? We would suggest two answers.

The first is that the US health care system is extremely inefficient, and this inefficiency becomes more costly as the health care sector becomes a larger fraction of the economy. Suppose, for example, that we believe that 30 percent of US health care spending is wasted, and always has been. In 1960, when health care was only 5.2 percent of GDP, that meant waste equal to only 1.5 percent of GDP. Now that the share of health care in the economy has more than tripled, so has the waste.
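The arithmetic behind this claim can be sketched in a few lines. The 30 percent waste figure is the article's illustrative assumption, not a measured value:

```python
# Waste as a share of GDP, assuming a constant fraction of health
# spending is wasted. The 30 percent figure is illustrative.
WASTE_FRACTION = 0.30

def waste_share_of_gdp(health_share_of_gdp: float) -> float:
    """Fraction of GDP lost to waste, given health care's share of GDP."""
    return WASTE_FRACTION * health_share_of_gdp

# 1960: health care was 5.2 percent of GDP
print(round(waste_share_of_gdp(0.052) * 100, 2))  # 1.56 percent of GDP
# 2004: health care was 16 percent of GDP
print(round(waste_share_of_gdp(0.16) * 100, 2))   # 4.8 percent of GDP
```

With the health care share of GDP roughly tripled, the same proportional waste triples as a share of the whole economy.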

This inefficiency is a bad thing in itself. What makes it literally fatal to thousands of Americans each year is that the inefficiency of our health care system exacerbates a second problem: our health care system often makes irrational choices, and rising costs exacerbate those irrationalities. Specifically, American health care tends to divide the population into insiders and outsiders. Insiders, who have good insurance, receive everything modern medicine can provide, no matter how expensive. Outsiders, who have poor insurance or none at all, receive very little. To take just one example, one study found that among Americans diagnosed with colorectal cancer, those without insurance were 70 percent more likely than those with insurance to die over the next three years.

In response to new medical technology, the system spends even more on insiders. But it compensates for higher spending on insiders, in part, by consigning more people to outsider status—robbing Peter of basic care in order to pay for Paul’s state-of-the-art treatment. Thus we have the cruel paradox that medical progress is bad for many Americans’ health.

This description of our health care problems may sound abstract. But we can make it concrete by looking at the crisis now afflicting employer-based health insurance.

2.

The unraveling of employer-based insurance

In 2003 only 16 percent of health care spending consisted of out-of-pocket expenditures by consumers. The rest was paid for by insurance, public or private. As we’ll see, this heavy reliance on insurance disturbs some economists, who believe that doctors and patients fail to make rational decisions about spending because third parties bear the costs of medical treatment. But it’s no use wishing that health care were sold like ordinary consumer goods, with individuals paying out of pocket for what they need. By its very nature, most health spending must be covered by insurance.

The reason is simple: in any given year, most people have small medical bills, while a few people have very large bills. In 2003, health spending roughly followed the “80–20 rule”: 20 percent of the population accounted for 80 percent of expenses. Half the population had virtually no medical expenses; a mere 1 percent of the population accounted for 22 percent of expenses.

Here’s how Henry Aaron and his coauthors summarize the implication of these numbers in their book Can We Say No?: “Most health costs are incurred by a small proportion of the population whose expenses greatly exceed plausible limits on out-of-pocket spending.” In other words, if people had to pay for medical care the way they pay for groceries, they would have to forgo most of what modern medicine has to offer, because they would quickly run out of funds in the face of medical emergencies.

So the only way modern medical care can be made available to anyone other than the very rich is through health insurance. Yet it’s very difficult for the private sector to provide such insurance, because health insurance suffers from a particularly acute case of the well-known economic problem of adverse selection. Here’s how it works: imagine an insurer who offered policies to anyone, with the annual premium set to cover the average person’s health care expenses, plus the administrative costs of running the insurance company. Who would sign up? The answer, unfortunately, is that the insurer’s customers wouldn’t be a representative sample of the population. Healthy people, with little reason to expect high medical bills, would probably shun policies priced to reflect the average person’s health costs. On the other hand, unhealthy people would find the policies very attractive.

You can see where this is going. The insurance company would quickly find that because its clientele was tilted toward those with high medical costs, its actual costs per customer were much higher than those of the average member of the population. So it would have to raise premiums to cover those higher costs. However, this would disproportionately drive off its healthier customers, leaving it with an even less healthy customer base, requiring a further rise in premiums, and so on.
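The spiral described above can be illustrated with a toy simulation. All the numbers here are hypothetical: ten customers with expected annual costs from $1,000 to $10,000, each willing to pay up to 1.5 times his or her own expected cost:

```python
# Toy model of the adverse-selection spiral: the insurer prices at the
# pool's average cost, the healthiest customers drop out, and the
# premium ratchets upward. All figures are hypothetical.
costs = [1000 * i for i in range(1, 11)]  # expected annual costs, $1,000-$10,000
WILLINGNESS = 1.5                         # buy only if premium <= 1.5x own expected cost

enrolled = costs[:]  # start with everyone enrolled
for round_ in range(5):
    premium = sum(enrolled) / len(enrolled)  # priced at the pool's average cost
    stayers = [c for c in enrolled if premium <= WILLINGNESS * c]
    if stayers == enrolled:  # pool has stabilized
        break
    enrolled = stayers
    print(f"round {round_}: premium ${premium:,.0f}, {len(enrolled)} customers remain")
```

In this run the premium starts at the population average of $5,500, the healthiest customers leave, and the price climbs to $7,500 before the pool stabilizes, with the healthiest four customers priced out entirely. With a less tolerant clientele the same dynamic can unravel the pool completely.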

Insurance companies deal with these problems, to some extent, by carefully screening applicants to identify those with a high risk of needing expensive treatment, and either rejecting such applicants or charging them higher premiums. But such screening is itself expensive. Furthermore, it tends to screen out exactly those who most need insurance.

Most advanced countries have dealt with the defects of private health insurance in a straightforward way, by making health insurance a government service. Through Medicare, the United States has in effect done the same thing for its seniors. We also have Medicaid, a means-tested program that provides health insurance to many of the poor and near poor. But nonelderly, nonpoor Americans are on their own. In practice, only a tiny fraction of nonelderly Americans (5.3 percent in 2003) buy private insurance for themselves. The rest of those not covered by Medicare or Medicaid get insurance, if at all, through their employers.

Employer-based insurance is a peculiarly American institution. As Julius Richmond and Rashi Fein tell us in The Health Care Mess, the dominant role of such insurance is the result of historical accident rather than deliberate policy. World War II caused a labor shortage, but employers were subject to controls that prevented them from attracting workers by offering higher wages. Health benefits, however, weren’t controlled, and so became a way for employers to compete for workers. Once employers began offering medical benefits, they also realized that it was a form of compensation workers valued highly because it protected them from risk. Moreover, the tax law favored employer-based insurance, because employers’ contributions weren’t considered part of workers’ taxable income. Today, the value of the tax subsidy for employer-based insurance is estimated at around $150 billion a year.

Employer-based insurance has historically offered a partial solution to the problem of adverse selection. In principle, adverse selection can still occur even if health insurance comes with a job rather than as a stand-alone policy: workers with health problems could flock to companies that offered health insurance, while healthy workers took jobs at companies that offered no insurance but paid higher wages instead. But until recently health insurance was a sufficiently small consideration in job choice that large corporations offering good health benefits, like General Motors, could safely assume that the health status of their employees was representative of the population at large and that adverse selection wasn’t inflating the cost of health insurance.
