1.

Except for a brief contraction in the early 1990s, the higher education system in the United States has been growing steadily since the late 1970s. Roughly half of all Americans now have attended college at some point in their lives, and roughly a quarter hold a postsecondary degree. (In the United Kingdom, by contrast, less than 15 percent of the population goes to university.) There are 14.5 million students in American colleges and universities today. In 1975 there were a little over 11 million; in 1965 there were fewer than 6 million. And yet when people in higher education talk about its condition and its prospects, doom is often in their voices. There are three matters these people tend to worry about: the future of the liberal arts college; the “collapse” (as it’s frequently termed) of the academic disciplines, particularly the humanities; and the seemingly intractable disparity between the supply of Ph.D.’s and the demand for new faculty. There are more college students than ever. Why does the system feel to many of the people who work in it as though it is struggling?

The fate of the liberal arts college, the decay of the disciplines, and the tightening of the academic job market present, on one level, distinct issues. The problems at the liberal arts college are chiefly financial; the problems in the humanities disciplines are chiefly philosophical (what does it mean to study “English,” for example); the problems with the job market are chiefly administrative—at some point, it seems, graduate schools will simply have to stop admitting more students than they can hope to place in permanent teaching positions. (Despite the consumer warnings about the job market that are now routinely issued to applicants by graduate admissions committees, between 1985 and 1997 graduate student enrollment increased by 27 percent.) The issues are related, though, and the easiest way to see why is to look at the system as a whole.

According to the Carnegie Foundation classification (the industry standard), there are 3,941 higher education institutions in the United States. Only 228 of these—5.8 percent of the total—are four-year liberal arts colleges that are not part of universities. Even in the major research universities (the schools categorized as Doctoral/Research–Extensive in the Carnegie classification, including such schools as Harvard, Yale, and the University of Chicago), only half of the bachelor’s degrees are awarded in liberal arts fields (that is, the natural sciences, social sciences, and humanities). In fact, apart from a small rise between 1955 and 1970, the number of undergraduate degrees awarded annually in the liberal arts has been declining for a century. The expansion of American higher education has been centrifugal, away from the traditional liberal arts core. The biggest undergraduate major by far in the United States is business. Twenty percent of all BAs are awarded in that field. Ten percent are in education. Seven percent are in the health professions. There are almost twice as many undergraduate degrees conferred every year in a field that calls itself “protective services”—and is largely concerned with training social workers—as there are in all foreign languages and literatures combined.

This helps to explain the apparent anomaly of a declining job market in an expanding industry. In 1970, nearly 25,000 students received bachelor’s degrees in mathematics (about 3 percent of all BAs), and 1,621 received bachelor’s degrees in fields categorized as “parks, recreation, leisure, and fitness studies.” In 1997, 12,820 students graduated with degrees in mathematics (only 1 percent of all BAs) and 15,401 took degrees in parks, etc. This is why math Ph.D.’s who wish to teach and do work in pure mathematics cannot find tenure-track jobs. It is not that there is no demand for college math teachers; it’s that there is much less demand for specialists in pure mathematics. The same thing has happened in English language and literature. In 1970, English majors took 7.6 percent of all BAs; by 1997, the figure was down to 4.2 percent, a drop in absolute numbers from 64,342 to 49,345. Literature courses are still taught, but the market for specialists is much smaller, since fewer undergraduates take classes beyond an introductory level.

The shrinking of the liberal arts sector (except in a few disciplines, notably psychology and the biological sciences, which produce more BAs now than they did twenty-five years ago) obviously has an effect on the disciplines themselves. Scholarship is, after all, largely a byproduct of the system designed to produce college teachers. People go to graduate school most often in order to acquire the credential they need to get a job teaching college students, and in order to acquire that credential, they are obliged to produce specialized scholarship under the direction of scholarly specialists. If (hypothetically) it were suddenly decided that the ability to produce specialized scholarship had no relevance to college teaching, and different requirements for the Ph.D. were instituted, academic scholarship would pretty much dry up. But that does not seem to be the anxiety that’s driving the so-called “collapse of the disciplines.” In order to understand the real extent of the transformation of American higher education, we have to go back fifty years.


2.

The history of higher education in the United States since World War II can be divided into two periods. The first period, from 1945 to 1975, was a period of expansion. The composition of the system remained more or less the same—in certain respects, the system became more uniform—but the size of the system increased dramatically. This is the period known in the literature on American education as the Golden Age. The second period, from 1975 to the present, has not been honored with a special name. It is a period not of expansion but of diversification. Since 1975 the size of the system has grown at a much more modest pace, but the composition—who is taught, who does the teaching, and what they teach—has changed dramatically. This did not happen entirely by design.

In the Golden Age, between 1945 and 1975, the number of American undergraduates increased by almost 500 percent and the number of graduate students increased by nearly 900 percent.1 In the 1960s alone enrollments more than doubled, from 3.5 million to just under 8 million; the number of doctorates awarded annually tripled; and more faculty were hired than had been hired in the entire 325-year history of American higher education to that point. At the height of the expansion, between 1965 and 1972, new community college campuses were opening in the United States at the rate of one every week.

Three developments account for this expansion: the baby boom, the fairly sustained high domestic economic growth rate after 1948, and the cold war. The impact of the cold war on the growth of the university is well known. During the Second World War, educational leaders such as James Bryant Conant, of Harvard, and Vannevar Bush, formerly of MIT, instituted the system under which the federal government contracted its scientific research out to universities—the first time it had adopted this practice. Bush’s 1945 report, Science—The Endless Frontier, became the standard argument for government subvention of basic science in peacetime. Bush is also the godfather of the system known as contract overhead—the practice of billing granting agencies for indirect costs (plant, overhead, administrative personnel, etc.), an idea to which not only many scientists but also many humanists owe their careers. This was the start of the gravy train that produced the Golden Age.2

In 1958, as a response to Sputnik and concerns about a possible “technology gap,” Congress passed the National Defense Education Act, which put the federal government into the business of subsidizing higher education directly, rather than through government contracts for specific research. The act singled out two fields in particular as targets of public investment—science and foreign languages—thus pumping up two distinct areas of the academic balloon. The act was passed just before the baby boom kicked in. Between 1955 and 1970, the number of eighteen- to twenty-four-year-olds in America grew from fifteen million to twenty-five million. And the entire expansion got a late and unintentional boost from the military draft, which provided a deferment for college students until 1970. The result was that by 1968, 63.2 percent of male high school graduates were going on to college, a higher proportion than go on to college today. This is the period when all those community college campuses were bursting up out of the ground. They were, among other things, government-subsidized draft havens.

Then, around 1975, the Golden Age came to a halt. The student deferment was abolished and American involvement in the war in Vietnam ended; the college-age population stopped growing and leveled off; the country went into a recession; and the economic value of a college degree began to fall. In the 1970s the income differential between college graduates and high school graduates dropped from 61 percent to 48 percent. The percentage of people going on to college therefore began to drop as well, and a system that had quintupled, and more, in the span of a single generation suddenly found itself with empty dormitory beds and a huge tenured faculty. This was the beginning of the long-term job crisis for American Ph.D.’s, and it was also the beginning of serious economic pressures on the liberal arts college. From 1955 to 1970, the proportion of liberal arts degrees among all bachelor’s degrees awarded annually had risen for the first time in this century; after 1970, it started going back down again.3

The rapid expansion that took place in the 1960s helps to explain the second phase in postwar American higher education, the phase of diversification. The numbers are not complicated. In 1965, 62 percent of students were men and 94 percent were classified as white; by 1997, 45 percent of students were men and 72 percent were classified as non-Hispanic whites. In that year, 1997, 45,394 doctoral degrees were conferred; 40 percent of the recipients were women (in the arts and humanities, just under 50 percent were women), and only 63 percent were classified as white American citizens. The other 37 percent were nonwhite Americans and foreign students.


Faculty demographics changed in the same way, a reflection not so much of changes in hiring practices as of changes in the group that went to graduate school after 1975. Current full-time American faculty who were hired before 1985 are 28 percent female and about 11 percent nonwhite or Hispanic. Full-time faculty who have been hired since 1985—that is, for the most part, faculty who entered graduate school after the Golden Age—are half again as likely to be female (40 percent) and more than half again as likely to be nonwhite (18 percent).4

There are a number of reasons why more women and nonwhite Americans, not to mention more non-Americans, began entering higher education in greater proportionate numbers after 1970, but one of them is purely structural. After 1970, there were fewer white American males for selective schools to choose from. The absolute number of white male American high school graduates going on to college was dropping, and, thanks to the expansion of the 1960s, those who were going to college had more institutions to choose from. So colleges and universities sought new types of students. After 1970, virtually every nonmilitary all-male college in the United States went co-ed. The system had overexpanded during the Golden Age. Too many state-subsidized slots had been created, and one result was a much higher level of competition among colleges to recruit students. People had talked before 1975 about the educational desirability of coeducational and mixed-race student bodies, but as Elizabeth Duffy and Idana Goldberg demonstrate rather dramatically in Crafting a Class, their study of admissions policies at sixteen liberal arts colleges in Ohio and Massachusetts, in the end it was economic necessity that made them do it.5

3.

The appearance of these new populations in colleges and universities obviously affected the subject matter of scholarship and teaching. An academic culture that had, for the most part, never imagined that “women’s history” or “Asian-American literature” might constitute a discrete field of inquiry, or serve to name a course, was suddenly confronted with the challenge of explaining why it hadn’t. The challenge led to a good deal of what might be called “antidisciplinarity”—work that amounted to criticism of the practices and assumptions of its own discipline. There is no doubt that this work, by feminists, students of colonialism and postcolonialism, nonwhites, gays, and so on, tended to call into question the very idea of academic disciplines as discrete and effectively autonomous fields of inquiry. But the questioning of the traditional assumptions of academic work, particularly in the social sciences and humanities, that took place after 1975 was only adding fuel to a fire started from other causes.

One of the persistent peculiarities of the debate over higher education that has been underway, off and on, since the late 1980s—the debate over multiculturalism, political correctness, affirmative action, sex and gender studies, and so on—is the assumption of many of its participants that the university of the 1950s and 1960s, the early cold war university, represents some kind of norm against which recent developments can usefully be measured, positively or negatively. The great contribution Thomas Bender and Carl Schorske have made in the recent collection they edited, American Academic Culture in Transformation,6 is to show how exceptional, and in some ways artificial, that earlier period was. For once the funding for academic research began coming from the state, and once “science” became the magic word needed to secure that funding, the paradigms of academic work changed. Analytic rigor and disciplinary autonomy became important to an extent they had not before the war. To put it another way: scholarly tendencies that emphasized theoretical or empirical rigor were taken up and carried into the mainstream of academic practice; tendencies that reflected a generalist or “belletrist” approach were pushed to the professional margins, as were tendencies whose assumptions and aims seemed political.

As Bender suggests, many scholars eschewed political commitments because they wished not to offend their granting agencies.7 The idea that academics, particularly in the social sciences, could provide the state with neutral research results on which pragmatic public policies could be based was an animating idea in the early cold war university. In the sciences, it helped establish what Talcott Parsons called the ethos of “cognitive rationality.” In fields like history, it led to the consensus approach. In sociology, it produced what Robert Merton called theories of the middle range—an emphasis on the formulation of limited hypotheses subject to empirical verification. Behaviorism and rational choice theory became dominant paradigms in fields like psychology and political science. In fields like literature, even when the mind-set was anti-scientific, as in the case of the New Criticism and structuralism, the ethos was still scientistic: theorists aspired to analytic rigor. Boundaries were respected and methodologies were codified. Discipline reigned in the disciplines. Scholars in the 1950s who looked back on their pre-war educations (some of these contribute their reflections to Bender and Schorske’s volume) tended to be appalled by what they now regarded as a lack of analytic rigor and focus.8

Because the public money was being pumped into the system at the high end—into the large research universities—the effect of the Golden Age was to make the research professor the type of the professor generally. This is the phenomenon Christopher Jencks and David Riesman referred to, in 1968, as “the academic revolution”: for the first time in the history of American higher education, research, rather than teaching or service, defined the vocation of the professor—not just in the doctoral institutions, but all down the institutional ladder. And this strengthened the grip of the disciplines on scholarly and pedagogical practice. Distinctions among different types of institutions, so far as the professoriate was concerned, began to be sanded down. This is why when the system of higher education expanded between 1945 and 1975, it also became more uniform. The cold war homogenized the academic profession.

The cold war introduced another element into the philosophy of higher education as well. This was the principle of meritocracy. The great champion of that principle was the same man who helped set in place the new financial relationship between the university and the federal government, James Conant.9 Conant was only articulating a general postwar belief that opening educational opportunities to everyone, regardless of race or gender, was simply a better way to maximize the social talent pool. If your chief concern is to close a perceived “technology gap” (or to maintain technological superiority), you can’t get hung up on an irrelevance like family income or skin color. The National Defense Education Act of 1958 was fairly explicit on this point. “The security of the Nation requires the fullest development of the mental resources and technical skills of its young men and women…. We must increase our efforts to identify and educate more of the talent of our Nation. This requires programs that will give assurance that no students of ability will be denied an opportunity for higher education because of financial need.” Thus Conant was a leader in the establishment of standardized testing: he essentially created the SATs. He thought of the SATs as a culturally neutral method for matching aptitude up with educational opportunity.10

The meritocratic philosophy was accompanied by a new emphasis on the importance of general education—that is, curricula designed for all students, regardless of their choice of specialization. In practice, general education received more lip service than implementation after the war: relatively few colleges actually created general education curricula, or required undergraduates to take specified extra-departmental courses of the kind Columbia College is famous for. But the lip service itself was considerable. The idea most educators subscribed to was that the great works of the Western tradition are accessible to all students in more or less the same way; that those works constitute a more or less coherent body of thought, or, at least, a coherent debate; and that they can serve as a kind of benign cultural ideology in a nation wary of ideology. This is the argument of the famous study Conant sponsored at Harvard, General Education in a Free Society, published in 1945, the volume known as the Red Book. Conant himself thought that exposure to the great books could help the United States withstand the threat of what he actually referred to as the “Russian hordes.”

It seems obvious now that the dispensation put into place in the first two decades of the cold war was just waiting for the tiniest spark to blow sky-high. And the spark, when it came, wasn’t so tiny. The Vietnam War exposed almost every weakness in the system Conant and his generation of educational leaders had constructed, from the dangers inherent in the university’s financial dependence on the state, to the way its social role was linked to national security policy, to the degree of factitiousness in the value-neutral standard of research in fields outside the natural sciences.

And then, as the new populations began to arrive in numbers in American universities after 1970, the meritocratic rationale was exploded as well. For it turned out not only that cultural differences were harder to ignore than men like Conant had imagined, but that those differences suddenly began to seem a lot more interesting than the similarities. This trend was made irreversible by Justice Lewis Powell’s opinion in Regents of the University of California v. Bakke, handed down in 1978. Powell changed the language of college admissions by decreeing that if admissions committees wanted to stay on the safe side of the Constitution, they had to stop talking about quotas and to begin talking about diversity instead.

Powell’s opinion blew a hole in meritocratic theory, because he pointed out what should have been obvious from the beginning, which is that college admissions, even at places like Harvard, have never been purely meritocratic. Colleges have always taken nonstandardized and nonstandardizable attributes into account when selecting a class, from musical prodigies to football stars, alumni legacies, and the offspring of local and national bigwigs. If you admitted only students who got top scores on the SATs, you would have a very boring class.11 “Diversity” is the very word Powell used in the Bakke opinion, and there are probably very few college catalogs in the country today in which the word “diversity,” or one of its cognates, does not appear.

As the homogeneity of the faculty and student body broke down during the period of diversification, the disciplines began their transformations. On the level of the liberal arts college, the changes were already in evidence by 1990, the year Ernest Boyer published his landmark study of them, Scholarship Reconsidered.12 The changes are visible today in a new emphasis on multiculturalism (meaning exposure to specifically ethnic perspectives and traditions) and on values (an emphasis on the ethical implications of knowledge); in a renewed interest in service (manifested in the emergence of internship and off-campus social service programs) and in the idea of community; in what is called “education for citizenship”; and in a revival of a Deweyite conception of teaching as a collaborative process of learning and inquiry.

The Golden Age vocabulary of “disinterestedness,” “objectivity,” “reason,” and “knowledge” and talk about things like “the scientific method,” the canon of great books, and “the fact-value distinction” have been replaced, in many fields, by talk about “interpretations” (rather than “facts”), “perspective” (rather than “objectivity”), and “understanding” (rather than “reason” or “analysis”). An emphasis on universalism and “greatness” has been replaced by an emphasis on diversity and difference; the scientistic norms which once prevailed in many of the “soft” disciplines are viewed with skepticism; “context” and “contingency” are continually emphasized; attention to “objects” has given way to attention to “representations.”

This trend is a backlash against the scientism, and the excessive respect for the traditional academic disciplines, of the Golden Age university. It can’t be attributed solely to demographic diversification, because most of the people one would name as its theorists—people such as Thomas Kuhn, Hayden White, Clifford Geertz, Richard Rorty, Paul de Man, and Stanley Fish—are white men who were working entirely within the traditions in which they had been trained in the 1950s and 1960s. In most cases, these scholars were simply giving the final analytic turn to work that had been going on for two decades. They were demonstrating the limits, in the humanities disciplines, of the notion of disinterested inquiry and “scientific advance.” The seeds of the undoing of the cold war disciplinary models were already present within the disciplines themselves. The artificiality of those Golden Age disciplinary formations is what made the implosion inevitable.

4.

One way to see the breakdown of consensus in the liberal arts disciplines is by looking at college catalogs. Compare, for example, the English departments at two otherwise quite similar schools, Amherst and Wellesley. English majors at Wellesley are required to take ten English department courses, eight of which must be in subjects other than creative writing. (Nor do basic writing courses count toward the major.) All English majors must take a core course, called Critical Interpretation; they must take one course on Shakespeare; and they must take at least two courses in literature written before 1900, one of which must be in literature written before 1800. With one exception, a course on “Medieval/Renaissance,” cross-listed courses—that is, interdisciplinary courses—are not counted toward the major. The course listing reflects attention to every traditional historical period in English and American literature.

Down the turnpike at Amherst, on the other hand, English majors have only to take ten courses “offered or approved by the department”—in other words, apparently, they may be courses in any department. Majors have no core requirement and no period requirements. They must simply take one lower- and one upper-level course, and they must declare, during their senior year, a “concentration,” consisting of three courses whose relatedness they must argue to the department. The catalog assures students that “the choices of courses and description of the area of concentration may be revised as late as the end of the add-drop period of a student’s last semester.” Course listings, as they are available on line, are not historically comprehensive, and many upper-level offerings are on topics like African (not African-American) writers.

At Amherst, in short, the English department has a highly permissive attitude toward its majors, and I’m sure if you asked why, the reason given would be that English should be understood more as an intellectual approach, a style of inquiry, a set of broad concerns than as a distinctive body of knowledge. At Wellesley, the department obviously has the opposite view. They see the field more concretely. Of course, the way a department chooses to represent itself in a catalog and what actually goes on in its classes are not necessarily identical. It’s likely that Amherst and Wellesley English majors end up learning many of the same things. But their notion of English as a field of study is probably very different.

Up the food chain at the graduate level, then, there is a problem. Does training to become an English professor entail familiarity with the history of English and American literature, or does it entail a more wide-ranging eclecticism, informed by a theoretical understanding of the essential arbitrariness of disciplinary boundaries? The closer liberal arts colleges move toward the Amherst model, the more unclear it becomes what “the study of English” means. On the other hand, the closer they stick to the older models, the more it may seem, to some people, that the liberal arts make a poor “preparation for life.”

Maybe the present state of uncertainty is not a portent of doom, though. Maybe it’s an opportunity. In most of what is written about higher education by people who are outside the academy, and even by some who are inside, there seems to be very little recognition that “higher education” today embraces a far more diverse set of institutions, missions, and constituencies than it did even thirty years ago. Many people still think of “college” as four years spent majoring in a liberal arts field, an experience only a minority of the people who attend college today actually have. The virtue of acknowledging this new dispensation is that it may encourage us to drop the one-size-fits-all manner of pronouncing on educational issues. Young people seek higher education for different reasons and have different needs, and different opportunities are held out to them. Institutions need to be more variously equipped to meet these needs.

The word “relevance” got very tiresome back in the 1960s, when it was used to complain about the divorce between academic studies and the “real world” of civil rights and Vietnam. But the truth is that the Golden Agers thought their work was relevant. They thought that the disinterested pursuit of knowledge, conceived as a set of relatively discrete specialties, was the best way to meet the needs of the larger society. There now seems to be a general recognition that the walls between the liberal arts disciplines were too high. Maybe it is also the case that the walls between the liberal arts and the subjects many people now go to colleges and universities to study—subjects such as business, medicine, technology, social service, education, and the law—are also too high. Maybe the liberal arts and these “non-liberal” fields have something to contribute to one another. The world has changed. It’s time to be relevant in a new way.13
