Even the phrase “welfare state” makes most Americans uneasy. For conservatives and neoconservatives, it smacks of socialism or, worse, European influences. For many, welfare has become synonymous with public assistance and other programs thought to foster indolence. Hence George Gilder’s adage, “the poor most of all need the spur of their own poverty.” If many liberals and those further to the left support the welfare idea, they are unsure about its scope and proper clientele. Still, more than a few will agree with Michael Walzer that a welfare state “expresses a certain civil spirit, a sense of mutuality, a commitment to justice.” Even so, the question is still raised whether programs will be largely for the poor, or if other classes should get services that are free or subsidized.

Nor can the issue be resolved by saying both may benefit. Robert Kuttner’s The Economic Illusion and Neil Gilbert’s Capitalism and the Welfare State, both important books, grapple with this question. In this country, Kuttner points out, we tend to keep “citizens” and “clients” in separate systems.

One has only to consider the visual and procedural differences between a local welfare office and a local social security office to appreciate that the recipients of middle-class social entitlements are treated as citizens, while welfare clients are presumed chiselers until proven otherwise.

Kuttner believes that a welfare state, properly conceived, must promote “the ideal of universalism.” That is, it should reflect and express “a common political community,” and do all it can to keep “the poor and the middle class within one system.” Thus citizens of every income level will attend the same medical clinics and send their children to common schools. One problem, as Kuttner acknowledges, is that for this universalism to take effect, the poor must be so upgraded that they will no longer be seen as a class apart. If we gave truly generous stipends to women heading households, they would cease being “welfare mothers,” and have the same status as, say, medical students holding scholarships. Kuttner cites unemployment benefits which, in the United States, peak at about 40 percent of wages previously received, whereas in Denmark and Sweden such compensation can reach 90 percent.

Compared with other countries, America has not used the welfare state to try to change the class structure. The Economic Illusion argues that because of a “flatter tax system and a more meager program of income transfers, US public spending has had relatively little effect on the final income distribution.” We have programs to ensure that the poor do not starve, but not enough to provide children with the shoes they need for school. Even now, we offer the homeless little more than a bowl of stew and a cot for one night.

Neil Gilbert takes the question further in his book, proposing to explain how the universal idea has been perverted in practice. He agrees that “one of the most forceful claims for universal entitlement to social services is the social solidarity inspired by government’s ministering to the common needs of all citizens.” Unfortunately, most middle-class Americans do not believe they share “common needs” with those now assisted by lower-tier programs. They feel no desire to be identified with welfare or its services, let alone such a state. Indeed, few can imagine falling so far as to need food stamps or outpatient medical care.

At the same time, better-off citizens have been availing themselves of other, quite costly, benefits. In one sense these, too, could be construed as “universal,” in that any American who owns a pleasure boat can call on the Coast Guard for assistance. In practice, of course, subsidies to higher education and cultural institutions serve the middle class more than the poor, as do Social Security and Medicare. Gilbert notes that since the middle class now comprises “the expanding client base of social services,” their claims to entitlements have “resulted not in strengthening the welfare state but in eroding its position.” So we have a weakened welfare system for the poor, and something else—not yet named—for everyone else.

In fact, what has happened is neither a perversion nor the betrayal of an ideal. In America, at least, a two-tiered arrangement was intended from the start. While one can trace its origins back to Bismarck, the American application of a “safety net” began with the New Deal. Government would do something to alleviate suffering caused by economic downturns, personal misfortunes, and impecunious old age. Theda Skocpol’s research into the origins of Social Security makes clear that “social solidarity” was never the aim. Frances Perkins of New York and Edwin Witte from Wisconsin, who guided the 1935 law through,

believed that open-ended governmental handouts to citizens must be avoided…. In their view, temporary “relief” payments to unemployed people, and minimal “public assistance” programs for dependent children and old people, had to be kept entirely separate from “social insurance” programs that workers would earn as a matter of right through regular tax contributions from themselves or their employers.1

Social Security pensions would be a “right” because they are based on contributions a citizen made, which in turn reflect earnings during one’s years of employment. They would be considered “insurance” as well, because all benefits flow through a “trust fund,” suggesting that some sum has been banked in each pensioner’s name.


The status of Social Security would also be enhanced if its funding were retained at the federal level while lesser clienteles were relegated to the states. So far as public assistance is concerned, one-half to two-thirds of the payments would come from the states, which are allowed to set their own benefit levels and eligibility rules. So today, whereas the same Social Security benefits are available in Alabama and Alaska, annual stipends from Aid to Families with Dependent Children (AFDC) average $1,074 in Mississippi and $1,352 in Tennessee, and go as high as $5,061 in Vermont and $5,100 in Minnesota. Similar variations are found with Medicaid, which is also administered by local agencies.

Consonant with this scheme, Social Security added Medicare to its menu, on condition that it too be construed as a form of “insurance,” with its own “trust fund,” and so set up that middle-class citizens may use it with their personal physicians. Social Security’s “supplementary social insurance” programs also have respectable clienteles, including the disabled, now numbering some 2.5 million, who had work records prior to their illnesses or accidents. (Hence the outcry when the Reagan administration tried to prune their rolls.) In the same way, elderly people who end up in nursing homes are allowed to dissociate themselves from the poor, who draw on other parts of the Medicaid program. Given this two-tiered tradition, Skocpol concludes, we have at best had an “incomplete welfare state” from the start. In fact, she suggests, “the term has an inappropriate ring about it for the US case.”

From some perspectives, not least the most recent election returns, the two-class arrangement may seem firmly in place, with continued erosion in store for the lower tier. Still, there are forces at work which cannot help but alter the scope and structure of public provision. Decisions are going to be made, but not always through normal political channels, nor always guided by current social values or ideologies. One such trend is the changing pattern of age groups in the population, in which the capacities of people will be as important as their numbers. Other developments are new technologies and canons of professional conduct, along with expanded conceptions of personal rights. These forces of course intertwine, but not always in expected ways. And while economic costs will perforce be important, this need not mean choices will be based on deciding what the country can “afford.” Several recent books and reports give us a glimpse of some claims and demands we can expect for new kinds of care.


Kate Quinton’s Days describes a year in the life of an eighty-year-old Brooklyn widow. A book about Mrs. Quinton is justified if only to assist us in sorting out our thoughts about people in her position. Between 1970 and 1980, the number of Americans her age or older grew by 36 percent, more than three times the rate for the general population. Mrs. Quinton, while not in the best of health, is still not chronically ill and can stay in her own apartment. Susan Sheehan shows what a city like New York is prepared to do for at least one of its elderly inhabitants. By my count at least eight different agencies were involved in aiding Mrs. Quinton, including the Nursing Sisters Home Visiting Service, a Transitional Community Placement Office, and the Medical Assistance Program of the city’s Human Resources Administration. During a five-month period, yet another service, the Home Attendant Program, dispatched no fewer than fifteen different women to look after Mrs. Quinton. Most of them were from the Caribbean and received only the minimum wage, which helps to explain the turnover. At the end of the book, Kate Quinton is in a wheelchair in her apartment, watching a soap opera with her latest attendant. “Time was passing as agreeably as she could expect.”

According to Susan Sheehan, home attendant care for someone like Mrs. Quinton cost the taxpayers $12,963 in 1982, over and above her own outlays for food, rent, and utilities. In that year, New York City had 26,400 such cases, costing $279.3 million. (By 1984, the list had grown to 46,900, with the budget commensurately higher.) Home care, although it costs close to three times Mrs. Quinton’s Social Security pension, is still cheaper than a nursing home. But if we are inclined to feel that home aides should receive rather more than the minimum wage, then the costs of this service would have to go up, as they would if we made it more widely available. Robert Kuttner cites Sweden, where home care is “universally available.” The Swedes employ one such aide for every 260 citizens, with the program’s staff comprising 2 percent of the national work force. If the United States wished to reach such a ratio, we would have to expand the rolls of our home-care personnel by twenty-five times.


Since 1963, the number of older Americans in nursing homes has grown from 448,000 to 1,316,000, a rise of 194 percent, almost four times the increase of the aged population. Charges depend a lot on location, sponsorship, and type of care. Few residents have enough in income or assets, so Medicaid (the program for the impoverished) pays most of the bills. A recent New York Times article described an eighty-eight-year-old woman who has spent the last thirteen years in one Manhattan nursing home. Far from being bedridden, “she keeps busy by taking calisthenics classes, singing in the choir, and performing in the drama group, where she sang the lead in ‘The Sound of Music.’”2 We can agree that this is marvelous: we should all live so long, and have such an active life. It remains only to add that, through Medicaid, the rest of us are paying over $45,000 a year for this particular woman’s care. The question is not whether the United States can “afford” sums of this magnitude for the millions of people moving into their eighties. Rather, it is whether we will decide this is what we want to do.

Susan Sheehan reminds us that public provision for the aged “expects nothing of the person’s children.” Even well-off adult sons and daughters need not chip in, let alone keep their parents in their homes. Moreover, the census notes that because of declining fertility, “elderly persons will have a smaller number of living relatives, including brothers and sisters, as well as children and grandchildren,” to look after them. Finances aside, time consumption and emotional strains figure greatly in caring for (and about) aged parents. In the past, adults were less apt to move into their forties and fifties still shouldering the burdens of being children. If that adjunct to adult life is increasingly common, it is by no means clear that most can claim great success in their efforts to handle it.

Currently, 27 percent of the federal budget is spent on the aged, compared with only 2 percent in 1940. The Population Reference Bureau projects that by the year 2025, only a generation away, people over sixty-five will absorb half of all federal outlays.3 Not only that, there will be fewer younger people to pay all those bills. By 2025, every 1,000 Americans aged eighteen through sixty-four will be called on to support 333 of their seniors, close to double the current load. Expenditures on and for older citizens fall under two major headings: Social Security pensions, and physicians’ and hospital charges under Medicare and Medicaid. I will touch on the latter first, if only because they are growing at a faster rate.

In 1967, when Medicare began in earnest, the program spent twenty-two cents for every dollar paid in pensions. By 1983, the Medicare ratio had grown to thirty-eight cents. By 2005, according to Barbara Torrey, a Census Bureau economist, “Medicare outlays for older people will be larger than Old Age Survivors Insurance outlays.”4 That is twenty years away. As it happens, most people in their sixties, and many in their seventies, are in passably good health. Only 27 percent of the 30 million people enrolled in Medicare spent any time in a hospital during 1983. The real problems come with advanced old age. And while everyone eventually dies, we are taking longer to do it.

Edward J. Schneider and Jacob Brody of the National Institute on Aging point out that despite greater longevity, illness rates for the elderly have not changed. In other words, even with advances in health and medicine, there are no greater signs of robustness among people in their eighties. But because more of us are reaching that stage, “more people will spend larger proportions of their lives afflicted with chronic diseases.” More than that, they continue, “the average period of diminished vigor will probably increase” and “we will be faced with a burgeoning number of patients in need of long-term care.”5

There is already a good deal of talk, and there will be a lot more, about containing medical costs. Last year, the typical short-term hospital bill for people covered by Medicare came to $4,485, plus another $894 to reimburse physicians’ fees. It has been proposed that fewer tests be administered, that many unnecessary operations be forgone, and that more procedures be performed in doctors’ offices. Many such changes will be coming, but how far they will reduce health-care outlays remains to be seen. The simple fact, as Schneider and Brody stress, is that because of longer life spans, “a huge proportion of the population will be suffering from chronic diseases.”

Moreover, more older people will develop diseases that would not have appeared had they died earlier of more conventional causes. The National Cancer Institute estimates that by the year 2030 the country will have twice as many cancer cases, with most of the increment coming from aged citizens.6 Quite obviously, these people will have to be treated—even if they are in their eighties or older—at not-insignificant costs. If a cancer can be arrested, or a heart bypass performed, the doctors, according to current American standards, are not allowed to withhold those procedures and thus let the patient die. Even so, many of these turn out to be terminal cases. Currently, 28 percent of the Medicare budget is expended on patients’ care during the final year of their lives. Granted, in some instances, these treatments begin with the hope of sustaining life. But past a certain point, whatever is done simply prolongs dying. Two months of such terminal care can run up bills exceeding $100,000. Great Britain, which views itself as poorer, bans certain procedures for people past specified ages. Our dilemmas over morals and public policy have been exacerbated by the fact that we see ourselves as the richest nation in the world, and therefore feel we should not deny ourselves whatever treatments are available.

The Population Reference Bureau takes a different tack in its study, Death and Taxes. As an exercise, it projects what would happen were everyone to die of “natural causes” at their “appointed old age,” rather than from cancer, heart disease, or stroke, as is now commonly the case. If these causes are eliminated, the bureau calculates, the resulting longevity would add another $100 billion to our annual pension bill. While this is only a model, progress in prevention and cure is proceeding apace. After all, it was the victory of antibiotics over pneumonia that made Social Security abandon its original assumptions about life span.


According to Michael Harrington, “the welfare state in the United States is primarily for people over sixty-five, most of whom are not now, and for a long time have not been, poor.”7 This statement deserves some examination. Pensions are certainly more generous than ever in the past, with indexing sustaining their purchasing power. Harrington’s view that most older people are not poor can certainly be supported by a look at marriages where the husband is over sixty-five. In 1982, the most recent year for these figures, the average income for such couples came to the quite comfortable sum of $21,562. By comparison, the figure for families under sixty-five was $31,585, somewhat higher, but then most younger households have children to support. Social Security pensions can now rise as high as $10,000 for those who maintained the top earnings level throughout their careers.

Moreover, each year there are more two-pension families. Among the couples who retired in September of 1984, about 70 percent of the wives were slated to receive work-based pensions of their own. The census also reports that 7.9 million retired families had private or civil-service pensions; and 72 percent of men and 67 percent of women over sixty-five have some “property income.” Finally, among families where the husband is sixty-five or older, fully 88 percent own their own homes, most of which are paid for and in many cases eligible for reduced taxes. In all, then, less than 8 percent of married persons over sixty-five are below the poverty level, less than half the rate for the rest of the population.

However, this is only one part of the story. As more people live longer, two tiers are emerging among Social Security recipients. Older retirees amassed their work records when wage levels and contributions were considerably lower. As a result, their benefits are smaller compared with those going to people currently retiring: payments to persons aged eighty-five or older average about 20 percent less than those for people sixty-five to sixty-nine. In addition, more than half of all workers are now retiring before they reach sixty-five, with the stipulation that they will receive smaller pensions for the rest of their lives. Between 1961 and 1984, the proportion of men filing for pensions before they reached sixty-five rose from 5 percent to 59 percent.

Social Security works rather well so long as people share a roof with somebody else. In September of 1984, a typical couple averaged $10,554 in tax-free benefits if both spouses had been employed and $8,964 if only the husband had worked. The problems set in when one partner dies, at which time what had been a dual pension suffers a severe cut. Needless to say, the men usually go first. Among women aged sixty-five to seventy-four, at least 49 percent still have resident husbands; but with those over seventy-five, only 24 percent are still married or have remarried. Not the least burden for widows is that their pensions continue to be based on what had been their husbands’ incomes. Thus the longer they survive, the less adequate that base becomes, even with cost-of-living adjustments. The pensions received by widows in the fall of 1984 averaged $4,771, putting them just below the poverty line. Indeed, while the total poverty rate in 1983 came to 15 percent, among widows and other older women living alone almost twice as many—29 percent—were below that threshold.

Here, too, we confront the question of how much we want to do for our old people. This year a two-job couple at the top contribution level will pay $5,584 in Social Security taxes, to help provide pensions for those already retired. It seems unlikely that a bill could pass charging such couples even higher rates so that surviving widows might live more comfortably. Samuel Preston, who heads the Population Studies Center at the University of Pennsylvania, points out that “expenditure on the elderly is almost exclusively consumption expenditure, in the sense that it does not appreciably affect the future productive capacity of the economy.”8

At the same time, senior citizens are not oblivious to their own interests. According to a recent census study, more of the elderly turn out to vote than any other age group. In the 1984 elections, 72 percent of those between sixty-five and seventy-four took part, compared with 55 percent in the twenty-five to thirty-four age group.9 This kind of turnout helps to explain why no one proposes serious cuts in benefits, even if there is no prospect of raising them. Yet here, too, two tiers come into view. Tax breaks for people who set up Individual Retirement Accounts will widen the class spread among Social Security recipients: those middle class and better, with outside incomes, will be increasingly better off than those who must get along on whatever the benefits happen to be.


Unlike the terminally ill, newborn babies have their lives ahead of them. Most would agree that every effort should be made to save infants whose survival seems in jeopardy, especially when hospitals can now reclaim lives that would otherwise be lost. Robert Weir’s Selective Nontreatment of Handicapped Newborns takes us into this world of intensive—and expensive—care, where pediatric surgeons practice the state of their art. On one side are doctors who “tend to view anomalous newborns as living tragedies that should have been terminated prior to birth.” Given the availability of amniocentesis, a defective fetus can be detected, with abortion the next option. Some physicians recall earlier times, when such infants died within days from what were then uncontrollable causes. They also know that when deliveries took place at home, severely handicapped babies were often dealt with by the doctor, who told the mother it was stillborn. This is no longer possible in crowded operating rooms, which would require, in Weir’s phrase, “group participation in the act of intentional killing.”

But the book centers on those physicians who pursue “an unbridled aggressive approach,” in which “all seriously defective infants are treated for an indefinite period of time, regardless of patient suffering or the apparent uselessness of the treatment.” Such was the case with one baby girl who had a debilitating bowel obstruction and underwent four operations, in the end unavailing, during her first four months. Weir tells of parents who discovered only later that their “physician’s enthusiasm to treat their daughter” did not mean he had any real hope that the surgery would be successful. He just looked forward to the challenge. In other instances, “seriously defective newborns may be given treatment that will not benefit them, but will create research and teaching opportunities for the clinicians in charge of their cases.”

Still, many babies can be saved, which raises the issue of “quality of life.” At this point, our collective conscience does not allow us to ask whether the existence a child will lead is one worth rescuing. Once we start deciding that some human beings are nuisances or burdens—or too costly—we are on a slope that has no established stopping places. Weir notes that because we can outwit nature as never before, we are consciously saving children we know will be burdensome.

According to one recent report, the number of “infants born with physical and mental disabilities has doubled since the peak of the baby boom, because technology allows these infants to live.”10 Only five years ago, half of all infants born with herpes perished. Now almost all survive, the majority mentally retarded. Weir says that about 200,000 newborn babies get high-technology treatment each year, with costs averaging $8,000, easily rising to five times that figure. Hospital charges for these babies alone came to $1.5 billion, and this does not count the extra care, much of it highly individualized, that must come later.

The “Baby Doe” rules proposed by the present administration would protect, indeed encourage, Weir’s “aggressive” physicians. “Baby Doe” died because his parents would not authorize corrective surgery and said they did not want him fed. Under the rules, parents no longer have this power. A sign reading “Discriminatory Failure to Feed and Care for Handicapped Infants is Prohibited by Law” must be posted in all hospitals. In addition, every possible procedure must be used so long as the infant is not “irreversibly comatose.” Treatment may cease only when further interventions are deemed “futile” by all on hand. A further section stresses that the “quality of life” a child would lead, were it to survive, may not figure in clinical decisions. At this point it remains to be seen how the rules will be interpreted and enforced. While the signs remain posted, there are also signs that the Surgeon General, himself a pediatric surgeon, wants to play down his police function.11

Weir’s proposal is for hospitals to set up committees, which will review alternatives and reach (or recommend) decisions as each case arises. In addition to nurses and physicians, these bodies would include a social worker and a lawyer, plus a “patient advocate” to plead the baby’s case, and a “trained ethicist” to ensure the proceedings adhere to “consistent moral reasoning.” However, he makes no attempt to forecast whether such committees will intensify or decrease efforts to save handicapped newborns.


Since 1970, the number of women raising children on their own has more than doubled, while intact marriages with children have declined. Hence the question of how much assistance should be available to single mothers. Women and the children they care for account for three-quarters of the 35,266,000 Americans officially listed as poor. Before examining what is currently being done, it would be well to clear the air on why we have so many fatherless families. While there are obviously many reasons, a few deserve special emphasis. Households without husbands can be divided into those where the mother was once married and those where she was not.

Most important, certainly, is the fact that fewer fathers feel committed to stay with the children they have sired. This is all the more the case when no marriage has taken place. When married couples have children, moreover, it is usually the husband who proposes the divorce. (In younger, childless marriages, the suggestion to split is apt to come from the wife.) These husbands feel they have the right to undo a mistake, indeed start a second life. Too much can be made of low earnings as a cause of husbands’ leaving. Temperament is what matters most. Many men with poorly paid jobs stick with their marriages, while the second-life syndrome is certainly common at higher economic levels.

Divorced mothers are less apt to remarry. This is not to say that women want or need another husband. Still, living on your own means you no longer share in a male income. Our most recent income figures, for 1982, show that married couples with children averaged $30,357, while families headed by mothers finished the year with $11,506. And the latter sum includes child support, such as it is. Most of these mothers were not awarded payments, or else the stipulated sums failed to arrive. The sexual imbalance is exacerbated by the fact that most divorced fathers desire a younger partner the next time around. Among divorced mothers in their thirties, four out of ten never marry again; for those in their forties, two-thirds never do.

However, the fastest growing source of fatherless households consists of young women who are not and never have been married. They become mothers because there is more premarital sex than there was in the past, a lot of it without contraception, while young men who participate in these pregnancies do not feel marriage must follow. In the past, most boyfriends did the “right thing” with a hasty trip to the altar. Today, 19 percent of all the nation’s births occur outside wedlock, compared with less than 5 percent in 1955. Moreover, for the past several years, the rate of illegitimate births for whites has been rising, while that for blacks has in fact been falling. (The current rate would be considerably higher were it not for the incidence of abortion.)

But the biggest change of all has been that no fewer than 96 percent of unmarried mothers choose to keep and raise their babies, whereas a generation ago only about one in five did.12 (This is why there are hardly any American-born infants—of any race—available for adoption.) Most of these young mothers join the public-assistance rolls, which means society foots the bills arising from the choices teen-agers are allowed to make. Just how the “right” to keep one’s child came about is an interesting question. On the one hand, conservatives could support the tradition that says a baby should remain with its natural mother. Reinforcing that right, however, was a new generation of social workers, who had been taught not to dominate their clients but to treat them as equals. It seems unlikely that we can return to earlier practices when an illegitimate birth was evidence of unfitness for motherhood. So the question is what can be done to ensure that these mothers and their offspring have a chance at decent lives.

There are pilot programs, like one in New York City that enrolls teen-aged mothers at a special school, complete with an on-site nursery. However, the cost of $4,400 per pupil suggests it will not get beyond the showcase stage. Most welfare benefits are doled out grudgingly. In 1983, stipends for a parent with two children averaged $3,673, less than half the poverty income for a family of three. Moreover, states have shown little inclination to adjust payments for inflation. Between 1975 and 1983, the purchasing power of welfare checks fell by 30 percent, although the increasing use of food stamps made up some of that gap. One other mitigating factor is that families receiving help from AFDC are smaller. There are now 193 children for every 100 such households, compared with 227 in 1975.

As it happens, since 1975, the number of families assisted under AFDC has remained relatively stable, as has the proportion of the nation’s children—about 11 percent—supported by the program. During the Reagan years, the rolls dropped slightly, from 3,712,000 in 1980 to 3,666,000 in mid-1983. The main reason for the decline has not been a ruthless purging of the rolls, but rather the use of abortion, which has also kept them from rising. According to the most recent figures (1982), there were 1,577,000 abortions, as against 3,704,000 live births. Moreover, almost three-quarters were performed on unmarried women, many of them teen-agers, who had no desire to embark on motherhood.

In most urban centers, hospitals and clinics continue to accept everyone who comes through the door, despite cutbacks in federal funding and the risk of bombings. One study estimates that 53 percent of all teen-agers faced with unintended pregnancies end them with abortions.13 Why some sixteen-year-olds decide this way and others don’t is not readily answered. Of two school friends from comparable backgrounds, one will end the pregnancy while the other will choose to have and keep her baby. Their decisions depend far more on highly personal factors than on those of class or race.

Among mothers who end up divorced, more than two-thirds begin or continue working. Their appearance on the labor scene has given prominence to two demands: wages keyed to “comparable worth” and subsidized day-care services. The former is really a new version of an ancient argument: that of the “just wage,” first propounded by St. Thomas Aquinas. At Yale University, for example, most of the clerical and technical positions are filled by women and, before the recent strike settlement, they were paid an average of less than $14,000 a year. That was considered a just enough wage when virtually all of these women had husbands who earned somewhat more. Working wives brought home a useful supplement, allowing middle-class comforts. Today, however, a higher proportion of working women are raising children on their own; and for them $14,000 is hardly sufficient. At Yale and elsewhere, women have demanded that each job be rated according to the skills it requires, plus other relevant factors, with payment then governed by that index. Women’s wages are currently undervalued; only with such a formula will they receive wages related to their “worth.”

In fact, the real issue is need. Women have generally been paid less because they would work for lower wages, since they had no urgent need for more money. Either they were married, or single and living at home, or doubling up with friends. Men, most of whom were principal earners, pressed for a “head of household” wage, which meant enough to support a family. Most of the jobs men have held are not more responsible or arcane than those held by women (truck driver versus nurse, railroad conductor versus teacher). Robert Kuttner remarks that “there is nothing intrinsic about assembling cars, mining coal, or pouring molten steel that dictates high wages.” Not surprisingly, women who head households are following the path of their male predecessors. Younger women are increasingly taking hitherto male jobs, since that is where the money is.

For older women on their own, however, this route is not so easy. Hence their call for “comparable worth,” which is really another way of saying, as their husbands once did, that they need more money because it now falls on them to provide for their children. If Yale’s laboratory technicians are to be paid not so much what they are worth as what they need, then its male professors (who last year averaged $52,200) may have to make do with somewhat less. A society that countenances widespread divorce should be prepared to relinquish a fair proportion of its income to the parents—mainly women—who are left with the children.

Which leads to day care, a service wanted by many working mothers, whether married or not. Unfortunately this is a subject about which we have very little coordinated information. The census reports that in 1982 there were 5,086,000 employed single mothers who had at least one child under the age of five.[14] Of these women, fewer than 15 percent placed their children in “group care centers,” while the rest left them at their own or someone else’s home or took them to work. However, another census study shows that 2,624,000 three- and four-year-olds (36 percent of their age group) are already “enrolled in schools.”[15] The census does not sort out, though, how many of these schools are also the “group care centers” cited in the earlier survey. Further figures, this time from the Bureau of Labor Statistics, indicate that 51 percent of the mothers who have children in “preprimary schools” have jobs of one kind or another, while the other 49 percent are not employed at all.[16] In other words, half the available places in these schools are taken by children of nonworking women, while the majority of working mothers must leave their children elsewhere. Class is a major factor here. Among the working mothers who manage to find day-care centers, those with professional positions outnumber blue-collar and service workers by two to one.

Day care is expensive, whether publicly provided, subsidized, or fully paid for by the parents. Wealthier school districts can afford to set up nursery programs, but they do not usually do so for the benefit of working parents. There is also the much-debated question of “quality.” Profit-making centers—now a growth industry—must foot all their own bills and still keep fees down if they want to find a clientele. Franchise operations like Kinder-Care and Children’s World tend to hire staff at the minimum wage, and such people are unlikely to have middle-class credentials. “Quality” care, apart from proper nutrition and sanitation, tends to mean a high ratio of staff to children, plus some sort of educational activity. Neil Gilbert reports that to provide “an environment that nurtures emotional and intellectual development,” the American Academy of Pediatrics says there should be one staff person for every four children. The Child Welfare League of America prefers a two-to-one ratio. If aides are to have master’s degrees in child development, as has also been proposed, they will want considerably more than $3.35 an hour.

Just how much day care costs is difficult to calculate. In 1982, the federal Head Start program reported expenditures averaging $2,300 for the 396,000 youngsters it served. However, many of its overhead costs are paid by the sponsoring institutions. A recent Newsweek survey estimated that real day-care costs typically come to $5,000 per child, while Marian Blum found charges of $7,800 for children in nursery schools in Boston, and $8,200 in New York City.[17] Needless to say, few parents can afford such prices, which are not far from those of Ivy League tuitions. One would suspect that the children of professional couples fill most of the places.

To provide “quality” child care for all working parents who desire it would clearly be costly. Even with scales based on ability to pay, many middle-class families would still need public subsidies. Of course, this is what we do with our state universities, where students pay $1,250 for an education costing several times that sum. It remains only to add that assistance for day care would also subsidize divorce by letting a lot of fathers off the hook.

There are still 21,676,000 nonworking wives, most of whom stay home because they or their husbands or both of them want it that way. Neil Gilbert, in Capitalism and the Welfare State, suggests they have chosen “a lower level of material consumption in exchange for more personalized care and home-centered activities.” To induce more women to continue to stay at home, Gilbert proposes a “family social credit scheme,” wherein the federal government would award homemakers free college or technical training once their children were grown. They could also be given preference for civil-service posts, as is now done for veterans. Such a program, Gilbert feels, would invest “the role of unpaid domestic labor with formal status” and support “the values of parenthood and traditional family life.”

However, twenty years of unpaid domestic labor should be worth more than a college scholarship. Michael Minton, a Chicago divorce lawyer, figures that work done within the home would, if factored in, account for one-third of the gross national product. He writes for aggrieved wives, who want suitable recompense from their departing husbands. To this end Minton provides charts showing what various wives may be worth. A mother with two preschool children provides “yearly value” of $44,436, which is what it would cost to hire outsiders to assume her tasks. A seamstress is figured at $3.20 an hour, while a laundress might be obtained for $2.80. He estimates that sessions with a “family counselor” at a $25 rate would be needed 120 times each year.

Minton is semiserious. He realizes that talk of wages for housewives exists mainly to make a point. For example, there is the behind-every-man argument, which claims that a full-time spouse so invigorates her husband that he not only succeeds in his career, but also augments the nation’s wealth. (Of course, it would be interesting to learn whether nonmarried men and those with working wives are any less energetic.) Even so, instead of salaries, Minton advises his reader to think of creating capital: “Every load of laundry she hauls down to the cellar and back is…building up credits and contributing to family assets”—on which she may someday wish to make a claim. As it happens, our society pays one group of women for staying home with their children. These are mothers who receive public assistance because they choose not to go to work. The only hitch, as has been mentioned, is that the “wages” given them remain well below the poverty line.


Perhaps the most coherent call for enlarging social benefits comes in the recent report on the nation’s economy by a committee of Catholic bishops. Disparities in our economic system, they found, “are among the greatest in the Western industrialized world.” Far too many Americans are poor, while too much of the nation’s wealth ends up with the rich and the upper middle class. Catholic social teaching has always held that “gross inequalities are morally unjustifiable”; in the bishops’ view, the American system “violates this minimum standard of distributive justice.”

The report relies heavily on Scripture, recalling that Jesus said, “Woe to you that are rich” and “exalted those of low degree.” The Catholic tradition is basically communitarian, and hence uneasy about competition that cleaves societies into winners and losers. For the bishops who prepared the report, the détente between Catholicism and capitalism is apparently wearing thin. “Support for private ownership,” they say, “does not mean that any individual, group, organization, or nation has the right to unlimited accumulation of wealth.” While they do not specify what constitutes too much, I presume they would find $2 million salaries for corporate chairmen out of line and question the $2 billion holdings of those at the top of the Forbes list.

Indeed, while the title refers to the economy, the report hardly mentions productivity, public deficits, or the balance of foreign trade. In common with most writing on the subject by the left, it is concerned chiefly with distribution, even of a smaller pie. So the bishops begin by stating “the poor have a special claim on our concern.” Nowhere do they say that the way to cure poverty is by giving business greater freedom.[18] If we want to reduce unemployment to half its current rate, it will have to be done by “direct job-creation programs” with government funds.

While the bishops do not applaud public assistance as a way of life, so long as we sponsor such programs, they should “avoid stigma” and “show respect for clients.” They also criticize variations in AFDC stipends, and urge “a national minimum-benefit level,” presumably to raise Mississippi to the Minnesota threshold. More than that, parents who so wish should be encouraged to stay at home: public agencies do “not have the right to decide that employment outside the home is the appropriate or preferred course.” But the bishops are open-minded; mothers who want to work are entitled to “improved child-care services.” While family planning and abortion cannot figure on their list, neither does the report condemn either one.

Catholic Teaching and the US Economy can be read as a North American venture into “liberation theology.” The five bishops who participated in its preparation mainly represent Middle American dioceses: Milwaukee, Salt Lake City, Atlanta, Hartford, and St. Cloud, Minnesota. Just how far they reflect the views of their 275 colleagues will become evident at discussion sessions later this year. Certainly, the authors recognize that implementing their proposals would be tremendously expensive, calling for considerably higher taxes from the better-off. Apparently, this prospect does not trouble them. On the contrary, our more comfortable classes now make and keep more money than they need, the bishops believe, because they have succumbed to “excessive consumption.” For this reason, the report targets not just a wealthy few, but “the richest 20 percent,” which encompasses families with $40,000-a-year incomes. Americans would be spiritually better off, the bishops contend, if they cut down on personal consumption and agreed to more public programs which would permit “the poor and the deprived…to live with dignity.”

Of course, a smaller sacrifice would be needed were we to spend less on arms. The bishops have hardly been the first to remark that “the investment of human creativity and material resources in the production of weapons of war…drains financial resources that should be dedicated to meeting human needs.” At the same time, an unstated premise of this position is that the Soviet Union is not as great a threat as is often made out. The bishops, or at least the five who signed this report, come across as less worried about communism than many of their communicants are.


In view of the growing proportion of the population that will need subsidies and assistance, it seems clear that providing for citizens who are not self-supporting may well become the nation’s foremost activity. With more Americans older and infirm, more children requiring special treatment, and more families fractured or unstable, one wonders whether we can also remain a presiding power on the world scene. Barbara Torrey entitled her analysis of the aged “Guns vs. Canes,” suggesting that there will really be no choice since we cannot abandon the elderly. Indeed, it is not clear that an aging nation can maintain a martial spirit. Simply in demographic terms, there will be far fewer young people of optimal military age. High-tech weapons make good threats; but an interventionist power must muster divisions.

The world has had many societies that stress military might, but very few that give primacy to welfare. By and large, those that do—like Holland, Norway, and New Zealand—are so small that they could not have much part in international power politics in any case. Moreover, any society that wants to guarantee comfort and dignity to a large dependent population will need habits of mind not generally characteristic of Americans. Patience and tolerance, for example, and the will to share the bishops’ view that “all persons have rights in the economic sphere” independent of their actual or potential contributions.

The biggest barrier, however, remains our two-tier system of social provision. Whether the future will find Americans more concerned about the poor remains questionable: tension and conflict over claims run deep in our tradition. All indications are that middle-class citizens will continue to want, need, and expect more medical benefits as they grow older, just as they now feel themselves entitled to generous pensions. Nor are there signs that they will see the poor as sharing their boat. When polled before the 1984 election, voters over sixty-five exceeded the national two-to-one margin in saying they felt people on public assistance were getting more than they needed. That sentiment tells us how far we are from such goals as “distributive justice,” “social solidarity,” and a “common political community.”

February 28, 1985