Last spring more than 9,000 faculty members in American colleges and universities received a questionnaire called “The 1977 Survey of the American Professoriate.” This questionnaire, twenty pages long, consisted of 128 multiple-choice questions. It had an impressive deep red cover stating that it was “directed” by Everett C. Ladd, Jr., of the University of Connecticut and Seymour Martin Lipset of Stanford. An accompanying letter from Lipset and Ladd informed those who received it that:
The primary reason for this faculty survey is to collect information useful to the formation of sound education policy. If intelligent responses are to be forthcoming, they must be based on an adequate reading of faculty preferences and the maximum input of faculty understanding. It is important that policy makers (inside and outside academe) know the views of the more than a half million men and women who, through their research activities and their training of over eight million college students each year, play so large a role in American educational and scientific life.
When I received the questionnaire, I read it and threw it away—a common reaction apparently, since about half of the questionnaires were not returned to the surveyors. After thinking further about the questionnaire, however, I became increasingly concerned because of its possible effects on American education, and decided to analyze it.
The questionnaire consists of ten sections, covering such matters as “Academic Standards,” “National Affairs,” and “The Norms of Science and Scholarship.” The covering letter did not say who sponsored the survey. I have since learned that it was paid for primarily by the Carnegie Corporation, the Spencer Foundation, and the National Science Foundation. More important, the findings have been getting much attention, and are being publicized as accurately representing the views of American professors. The Chronicle of Higher Education, which reaches some 67,000 academics and administrators, has recently been publishing a series of articles summarizing the results of the survey. I have seen articles on it in such different publications as Newsweek, The Manchester Guardian Weekly (“Why Campus Morale Has Plunged”), and Stanford University’s Campus Report (“Survey Shows Academics Support Private Business, Fear Big Government Growth”). Mr. Ladd himself stated in February 1978 that “policy makers take our material seriously,” and he pointed out that within the last two months he had received inquiries about his survey’s data from such groups as the National Science Foundation and the Sloan Commission on Higher Education.1
I regard the questionnaire as defective in at least two ways: first, it prejudices many questions to the point where respondents may reasonably object to dealing with the issues on the surveyors’ terms. Second, it ignores what might be called the Heisenberg Principle of the social sciences: that some things cannot be measured without being altered by the measuring process.
I’m not claiming, of course, that Lipset and Ladd have deliberately prejudiced the questions, only that many of their questions reflect ways of perceiving the issues that are repugnant to others. Lipset admits that he has tried to “simplify complex issues into a variety of simple items to which everyone responds.” But the prejudice I have in mind goes well beyond such simplification. The surveyors do not allow for people whose thinking is quite different from their own or from that of the group they used to “pretest” the questionnaire before it was sent out.
For reasons of space, I will limit myself to five questions from three categories: “Current Concerns,” “National Affairs,” and “The Norms of Science and Scholarship.”
Question 3. The statements below relate to teaching and student performance. Does each correctly reflect your personal judgment?
(1) Definitely yes
(2) Only partly
(3) Definitely no
a. The students with whom I have close contact are seriously underprepared in basic skills—such as those required for written and oral communication.
This statement does not connect with reality as I know it. In 1977, for example, there were about 900 students taking the first three terms of calculus at Yale. Of these, about 150 freshmen had trouble with ninth-grade algebra. On the other hand, about 180 freshmen were qualified to take third term (or second year) calculus, and they made up about two-thirds of the class. (The others were sophomores.) The simultaneous existence of a large group of badly prepared students with an even larger group of exceptionally well prepared students is a fascinating phenomenon, which cannot be taken into account by answering Question 3a as directed.
I have taught for twenty-seven years. During this period there has been a substantial improvement in the mathematical skills of a large group of students. Another large group still performs disastrously in, say, ninth-grade algebra. But twenty-seven years ago the performance of a comparable group would have been even worse. The emergence of a large number of well-prepared advanced students was the result of reforms dating back to the late 1950s. The same movement which gave rise to the “new math”—which I have partly criticized—also led to some improvements in the high school curriculum. While it would be useful to ask college professors their views on current high school training, and to gather concrete suggestions on how to improve it, this cannot be achieved by Question 3a, which lacks the necessary precision and is intellectually at the level of a TV panel. It can only mislead people, or be improperly interpreted, thus preventing the “formation of sound education policy.”
Question 3b. “Grade inflation” is a serious academic standards problem at my institution.
The question does not deserve serious consideration because an answer can be interpreted in several ways. Suppose that a person chooses answer (3), “Definitely no.” The respondent could mean that grade inflation is not a problem per se; that it is a problem per se, but not at the respondent’s institution; that it is a problem at the institution, but not a serious problem; that it is a serious problem but not at the respondent’s institution, etc.
Question 3c. American higher education should expand the core curriculum, to increase the number of basic courses required of all undergraduates.
This question poses a generalization so sweeping that it does not make sense. Universities differ; their functions differ; there are plausible reasons for having many choices among them. Who would want to answer a question like this one in an absolute way? What do the questioners mean by “American higher education”? The Harvard-Princeton-Yale circuit? The University of Michigan? Chicago? Ohio State? Kent State? Berkeley? UC Riverside? I know little about the basic courses required of undergraduates at institutions other than my own. (The same can be said for many of my colleagues.) How can I, lacking that knowledge, answer whether these should be increased or not?
Notwithstanding the vagueness of such questions, statistical conclusions based on them quickly find their way into the press, as in the October 24 Newsweek:
Ladd and Lipset will not finish their analysis until next year, but their poll already suggests one answer: More than three quarters of all professors think that colleges and universities should bring back the “core curriculum”—forcing all students to take more basic courses in a variety of fields.
Question 3d. A grading system which rigorously discriminates good student performance from bad contributes positively to student motivation.
Another sweeping generalization. Certain kinds of grading, under certain circumstances, do encourage learning; but I find that grading in itself, without other effective means of teaching to go along with it, does not necessarily have an effect one way or another. It may have effects both ways—negative and positive—depending on the personality of the professor, the subject matter, the relations between professors and students, the general atmosphere of the school.
Consider, for example, the spectacular case of the California Institute of Technology, where the faculty decided that the grading system contributed to an undesirable atmosphere. During the early 1960s, a large part of the freshman class at Caltech left after the first year; in 1963, the drop-out rate reached 23 percent. Apparently the students could not stand the exceptionally high pressure of the place. In 1965 the faculty set up the “pass-fail” system for freshmen that is still in force today. I know from first-hand experience that Caltech has one of the most intelligent and strongly motivated freshman classes of any university in the country. None of the available answers can adequately represent my feelings concerning grading.
Question 3e. I find myself not grading as “hard,” not applying as high standards in assessing student work, as I believe I should.
Whichever way I answer the question—“definitely yes,” “only partly,” “definitely no”—I concede that the question has meaning, even though I believe this particular way of perceiving my relations to students and grading makes no sense. I resent such choices being imposed on me, and ultimately used and manipulated toward the formation of education policy that cannot be sound.
In Questions 3b, 3d, 3e the surveyors run up against the Heisenberg Principle. By harping on grading to the exclusion of other aspects of teaching, they contribute to giving grades more importance than I and many others think grades deserve. They emphasize a question of dubious journalistic interest; for one’s views on grading are sometimes interpreted as reflecting a “strict” versus a “permissive” frame of mind toward students. I do not wish to deal with such matters as grading on the surveyors’ terms.
Question 85. Please indicate whether you agree or disagree with each of the following statements on economic policy.
(1) Strongly agree
(2) Agree with reservations
(3) Disagree with reservations
(4) Strongly disagree
a. The private business system in the United States, for all its flaws, works better than any other system yet devised for advanced industrial societies.
Why the surveyors seek our “judgments on a number of basic questions of national politics and policy” such as this one, as the cover letter states, is left unclear. But this question and its possible answers are loaded with assumptions I cannot accept.
First I would point out that the “system” was not so much “devised” as it evolved through various phases and through various periods of history. At one time we had seemingly unbounded supplies of energy and natural resources; this situation has evidently changed. How an advanced industrial society like the US can adjust—both internally and relative to the outside world, e.g., OPEC—to using more expensive energy and dwindling resources is a question whose answer will vary according to the period we are concerned with. That the business system can hardly be called “private,” moreover, should be clear from the huge government contracts for defense and subsidies to railroads, agriculture, highway construction, energy research, etc.
The surveyors do not say with what other systems of “advanced industrial societies” they want me to compare the “private business system” of the US: Do they mean the systems of Western Europe (closely intertwined and similar)? Of Japan (similar)? Possibly that of the Soviet Union? If the latter, one fact that comes to my mind is that the US paid $45 billion this year for oil imports, while the USSR needs to make a grain deal with the US every year or so. Neither system seems to be working well, and both look as if they are heading toward a crisis.
b. There should be a top limit on incomes so that no one can earn very much more than others.
c. Economic growth, not redistribution, should be the primary objective of American economic policy.
d. The motivation to work hard, to achieve, must receive greater recognition and stimulation in the US.
e. The growth of government in the US now poses a threat to the freedom and opportunity for individual initiative of the citizenry.
These questions again use familiar political and economic rhetoric in the ways they pose alternatives. (Not necessarily right-wing or left-wing rhetoric—just familiar.) I find none of these choices satisfactory. I agree with those who find that the political and economic forces now acting upon us require an entirely different way of thinking. Perhaps we do not wish to emphasize growth or redistribution so much as we wish to emphasize readjustments throughout the society to different ways of living and consuming. Certainly one of the primary goals of economic policy should be the transformation of the economy in order to use new sources of energy, which will require vast expenditures. To present only the alternatives of Question 85c (growth or redistribution), and no others, as “the primary objective of American economic policy” prejudices the issues to the point where I do not wish to deal with them on the surveyors’ terms.
As to Question 85e, I believe the growth of government in any country always poses a threat to individual initiative and individual freedom of the citizenry. I did not invent this point of view, nor did Thomas Jefferson. However, if I answer that I “strongly agree,” there are many ways of interpreting my answer, for example that I am against the government levying a heavy tax on cars above a certain size. I do not wish to be so interpreted, and consequently I don’t want to give surveyors data including answers to Question 85e.
On the other hand, I do not equate laws controlling gasoline consumption with laws controlling what people do with their private lives or inhibiting dissent. In that sense, the word “private” when applied to the “business system” has a meaning different from the one it has when we talk of “private life,” and the right to “privacy.”
Question 87. Do you personally approve or disapprove of the social practices or behavior listed below?
(1) Strongly approve
(2) Approve with reservations
(3) Disapprove with reservations
(4) Strongly disapprove
a. The use of marijuana
b. “Swinging” (the swapping of sexual partners by consenting married couples)
c. Excluding women from membership in certain social clubs
d. Pornographic motion pictures and magazines
e. Premarital sex
f. The use of such drugs as heroin and cocaine
g. Extramarital sexual relations in the absence of spouse’s consent
h. The level of violence prevailing in current television programming
I have not yet met anyone who “approves” of heroin, even with reservations, although some may have checked this answer just for fun. As for the “swinging” question, my answer is that I neither approve nor disapprove. What business is it of mine what others do in their private lives? The same answer goes for the use of marijuana, for premarital sex, or for pornographic motion pictures and magazines.
I find it obnoxious to be pigeonholed as a person who approves or disapproves such behavior. Here as elsewhere the questioners require us to approve or disapprove when we may not wish to do either. (In this, of course, they are typical of many other American sociologists.)
I fail to see the relevance of this question to the “formation of sound education policy.”
Question 89. For each of the following situations, what do you think the US should do?
(1) Take no military action
(2) Send military aid but not US personnel
(3) Send air support but not ground troops
(4) Send US troops if necessary
a. If the Soviet Union invaded West Germany
b. If the Soviet Union invaded Yugoslavia
c. If Rhodesia were subject to a massive invasion from the surrounding states
d. If Israel were attacked by Arab countries and threatened with defeat
How this question will help form sound education policy is beyond me. The question itself is posed coarsely. Most of us would regard the threats posed in a, b, and d as horrifying ones; but no one can rationally decide policy from such stark hypotheses. The Israelis, for example, have always insisted they wanted no US troops; here the respondent is asked to make the decision for them.
The Harvard mathematician Neal Koblitz has suggested a deeper flaw. The surveyors apparently assumed that, in the case of an invasion of Rhodesia, to choose options 2, 3, or 4 was to advocate US military support of the white minority regime there. No option is offered to support the black majority. But, as Koblitz argues, the invasions in a, b, and d would be bitterly resisted by most of the attacked inhabitants. In Rhodesia the black majority, some 96 percent of the population, might regard an outside intervention as a liberation and expect the US to help support it, just as the US has previously supported sanctions against the white regime. The respondent has no opportunity to express this point of view.
Question 115. The statements that follow deal with the attitudes and actions of scholars and scientists. Please indicate first (A) whether you and others in your discipline agree with the statements, and second (B) whether you and others in your discipline act in accord with them.
A. Agree/Disagree with this statement
B. Act in accord with this statement, (1) almost always, (2) sometimes, (3) rarely
The answers are split into two cases: “I myself”; “most others I know in my discipline.”
The question consists of twelve parts but all suffer from the obvious logical defect of referring to “most others I know in my discipline.” No matter what the subject of agreement or disagreement is, there may be some in the discipline who think one way, and others who think another way, while the distribution may not fit the qualification “most others.”
Question 115a. Scientists and scholars should prefer critical evaluation by competent peers to public acclaim.
First, I do not see why critical evaluation by peers is presented as in conflict with public acclaim. Criticism or praise from scholars, public acclaim or anonymity, can occur in any combination. Why should one be compared to the other, let alone preferred?
Second, it is none of my business whether a particular scientist or scholar likes public acclaim. Here again (as in the question on “swinging”) the questionnaire insists that we either approve or disapprove. What business of mine is it to tell others what they “should” do with their lives, including seeking public acclaim?
Question 115c. In general, scientists and scholars are unjustified in keeping their research findings secret.
To whom does this question refer? To someone doing pure science in a university? To a university scientist “consulting” for an industrial company? To a university scientist working part-time for the government? In connection with war work? With other work (e.g., energy)? No matter what interpretation a respondent may have, there is no way to tell what it is. He has merely been trapped into dealing with a catch phrase.
What, for example, does “secret” mean? Some scientists like to keep their results to themselves until the results have achieved a high degree of coherence. This may take months, perhaps years, but they may achieve an overall effect not obtainable by piecemeal publication. Gauss, a famous mathematician, followed this practice. Others like to talk about their results to all comers. Are the former keeping their findings “secret”?
The question allows me no opportunity to state my opinion, namely that what scientists or scholars choose to do with their research is, for the most part, none of my business; however when they do research that may have grave effects on society generally—e.g., on neutron bombs or nerve gas—I am concerned to know about it. But here as elsewhere I have no idea what the questioners are trying to measure, or why. The question is an exercise in mind reading.2
I cannot imagine how “sound education policy” could be made on the basis of this and the other questions I have analyzed.
Lipset’s reply to critics such as myself was reported in the journal Science of February 17, 1978:
Lipset refutes the criticisms of his survey by explaining that the critics do not really understand the aims or nature of his research. For example, they pick on what they see as ambiguities in the interpretations of specific questions. But, Lipset says, in survey research no one question is intended to reflect attitudes of respondents. He and Ladd are looking for patterns of responses to related questions. Thus they are not looking for what percentage of the respondents say grade inflation is a problem, but are looking at how answers to this and other questions on grading vary with institutions.
While Ladd and Lipset claim they are not looking for percentages, the published reports of their survey contradict them. In their own article in The Chronicle of Higher Education of January 16, 1978, Ladd and Lipset make some comparisons but most of their results are stated in percentages. For example:
…it may come as a surprise to businessmen who regard the campus as a bastion of hostility to the free-enterprise system that 81 percent of the faculty members in our survey agreed with the statement: “The private business system in the United States, for all its flaws, works better than any other system devised for advanced industrial society.”
A sizeable majority, 62 percent, did approve of “premarital sex.”
Over two-thirds, 69 percent, approved the view that “the growth of government in the US now poses a threat to the freedom and opportunity for individual initiative of the citizenry.”
A smaller majority, 54 percent, of faculty members endorsed the proposition that “economic growth, not redistribution, should be the primary objective of American economic policy.”
Neither Campus Report nor Newsweek mentioned that less than half of the questionnaires—some 4,400—were returned. But this fact is crucial to understanding why the data are invalid at best, dangerous at worst. Some who returned the questionnaire may have shared the kinds of assumptions and attitudes that characterize the questions themselves. Others may have replied without thinking much about the questions. Professor Roger Howe of Yale, for example, wrote to Ladd and Lipset: “Although I irreflectively returned the questionnaire, I concur with the substance of [Lang’s] objections”; and Professor J.K. Goldhaber of the University of Maryland wrote Lipset, “I regret having participated in your survey.”
For some surveys of opinion, a return by mail of 50 percent is considered successful. But far from providing a statistically representative sample of the opinions of the “American Professoriate,” this survey may simply distinguish between those who (perhaps without reflection) share the patterns of thinking of Mr. Lipset and Mr. Ladd and those who do not. I have had a number of letters from distinguished scholars who do not. But it would be as hard to measure all the evidence for my hypothesis as it is to measure whatever the questionnaire is trying to measure.
Whatever the specific “aims and nature” of the survey may have been, the professors were kept ignorant of them. Ladd and Lipset now say they were seeking “patterns of responses to related questions”—but which ones? Were they correlating views on “swinging,” politics, and “grading,” for example? Would not the attitudes of professors to the survey have been affected if they had known how it would be analyzed? Some sociologists claim that telling respondents about the correlations that are sought will put the respondents off—another illustration of the Heisenberg Principle.
“I enclose your questionnaire unfilled out,” Professor John Tate of Harvard wrote Ladd last spring. “Please stop pestering me with it, unless you can explain to me just how such a questionnaire will help ‘formulate sound education policy.’… I am unable to see what earthly use such a survey could be to anyone. In fact, who does pay for it?”
Lipset replied that the survey was
paid for by a variety of agencies, principally the Carnegie Corporation, the Spencer Foundation, and the National Science Foundation. The…survey is in large part a “service” one, including sections of interest to N.S.F., a committee on international activities of the National Academy of Sciences, and the A.A.U.P. In addition, it includes some of our own concerns in the sociology and politics of scholarship and science.
This answer raised more questions than it answered. The vice president of the National Academy of Sciences, Professor Saunders MacLane, wrote me that the reference to the National Academy probably had to do with information for a board that is now defunct. As for the NSF, MacLane writes that its Division of International Programs paid to have certain questions included in the survey, presumably those on the foreign travel of scholars in part ten.
Those questions ask for specific information and make sense. I would no more object to them, taken by themselves, than I would, say, to a separate survey by the NSF posing specific questions to scientists about funding research grants. Addressed to a homogeneous group and intelligently phrased, such a survey could help in formulating sound policy.
The same cannot be said of the twenty-page questionnaire distributed by Ladd and Lipset whose “own concerns in the sociology and politics of scholarship” have been mingled, for example, with the NSF’s concerns about foreign travel. Were the NSF and the AAUP aware of the other questions in the survey, such as the ones I have analyzed, and did they specifically approve including their own questions in it? To what extent are the data concerning each part invalid because of the prejudice introduced by the other parts? The questions about politics and education and private morals may well have put off many from answering the questionnaire. We’ll never know how many; nor will we know how many of those who answered the questionnaire would now regret having done so, as Howe and Goldhaber have.
What is clear is the increasing evidence that some scholars have had reason to reject the questionnaire. As Barclay Kamb, chairman of the geology department of Caltech, recently wrote me, “I would consider the results obtained from such a questionnaire to be inappropriate as a basis for the formulation of academic policy. I see no value in it….”
S.E. Luria, the Nobel Prize-winning biologist and director of the MIT Center for Cancer Research, writes:
I have examined the questionnaire by Ladd and Lipset. I find it tendentious and objectionable for two main reasons. First, the structure of the questionnaire lends itself to questionable associations of political or social views with professional and educational practices. Second, the questionnaire can hardly be depended upon to protect the anonymity of the respondents (a computer program could easily extract the identity, at least for senior respondents).
It is also clear that an increasing number of social scientists question the value of the survey, some on the basis of the analysis published here and previously circulated in draft. Science reported the following comments in its issue of February 17:
“I would not support a questionnaire that closed off the choices the way Lipset and Ladd do.” [Charles Hamilton, professor of political science, Columbia]
Frank Riessman, a sociologist at the City University of New York and an editor of the journal Social Policy, says he is shocked by the way the survey questions are politically slanted.
[The sociologist] Marion J. Levy, Jr., of Princeton University…believes that Lang’s criticisms are “disturbing and far-reaching,” and that they apply to most survey research.
No valid inferences about “faculty preferences” can be drawn from “The 1977 Survey.” Too many of its questions are ambiguous or meaningless, or prejudice the issues, as I have described. Its data are invalidated, moreover, not simply by the failure of half the sample to return the questionnaire but by the fact that some of that half have had strong reasons for objecting to it that others may well share; while an indeterminate number of those who answered may change their minds, as did Howe and Goldhaber.
The Chronicle of Higher Education, in its January 23 issue, reported some of the recent protests against the survey, but took account of the detailed analysis I have made only in this comment: “The specific criticisms of the faculty survey, Mr. Ladd says, were the kind of criticisms that could be lodged against any kind of survey research.”
I do not wish to condemn all questionnaires. But if other questionnaires are subject to the same criticisms, that does not make this one any better.
May 18, 1978
1. See Newsweek, October 24, 1977; The Manchester Guardian Weekly, January 22, 1978; Campus Report, January 18, 1978; and Ladd’s statement in Science, February 17, 1978.
2. The same comments apply to Question 115i: “Scientists and scholars should be willing to inform others investigating similar problems about their work in progress.”