In response to:

The Inferiority Complex from the October 22, 1981 issue

To the Editors:

Everyone will acknowledge that the heritability of intelligence and the reliability of I.Q. tests raise difficult empirical questions and that the issue is complicated by an unhappy history of prejudice, bad science and, sometimes, outright fraud. But R.C. Lewontin thinks it’s worse than that. In his review of Stephen Jay Gould’s The Mismeasure of Man [NYR, October 22] he adds the surprising claim that anyone who thinks you could measure intelligence or examine its etiology is guilty of a conceptual error:

…there is the conceptual error. Intelligence, acquisitiveness, moral rectitude are not things, but mental constructs, historically and culturally contingent. The attempt to find their physical site in the brain and to measure them is like an attempt to map Valhalla. It is pure reification, the conversion of abstract ideas into things. While there may be genes for the shape of our heads, there cannot be any for the shape of our ideas.

As philosophers we warm to the prospect of squelching a raucous scientific controversy from the comfort of our armchairs. But all the conceptual errors here seem to be Lewontin’s.

Lewontin never makes clear what “reification” is or why it’s a bad thing. His example is poorly chosen. The man who wants to map Valhalla is guilty of a factual, not a conceptual, error; he’s mistaken in thinking the place exists. The emphasis on “things” suggests that we are being warned off the error of thinking that intelligence is a physical object, like a rock or a kidney. That would be a conceptual error, but not one that anyone has ever been guilty of. Intelligence is (if anything) a property of things (people). But Broca, the I.Q. testers, and everyone else knew that. The question is: what is wrong in principle with thinking that this property might be measured, inherited, or correlated with special features of the brain?

At one point Lewontin complains that the only evidence for the adequacy of I.Q. tests is that their results agree with one another. But this just ignores the fact that the tests rank people in an order which corresponds to what we independently judge to be their relative intelligence. Lewontin grudgingly acknowledges this, but talks as if it only made the whole business suspect:

In order for the original Stanford-Binet test to have won credibility as an intelligence test, it necessarily had to order children in conformity with the a priori judgment of psychologists and teachers about what they thought intelligence consisted of. No one will use an “intelligence” test that gives highest marks to those children everyone “knows” to be stupid. During the construction of the tests, questions that were poorly correlated with others were dropped since they clearly did not measure “intelligence,” until a maximally consistent set was found.
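In modern terms, the winnowing described here is ordinary item analysis. As a minimal sketch (with wholly invented children, items, and cutoff; nothing below is drawn from the Stanford-Binet or any actual test):

```python
# A toy version of the winnowing described above: start with many candidate
# questions, repeatedly drop the item that correlates worst with the rest,
# and stop when the surviving set is maximally consistent. All data invented.
import numpy as np

rng = np.random.default_rng(0)
n_children, n_items = 200, 12

# Simulated answers: nine items share a common factor, three are pure noise.
common = rng.normal(size=(n_children, 1))
loadings = np.array([0.8] * 9 + [0.0] * 3)
scores = common * loadings + rng.normal(size=(n_children, n_items))

items = list(range(n_items))
while len(items) > 2:
    r = np.corrcoef(scores[:, items], rowvar=False)
    mean_r = (r.sum(axis=1) - 1) / (len(items) - 1)  # each item vs. the rest
    worst = int(mean_r.argmin())
    if mean_r[worst] > 0.25:                         # arbitrary consistency cutoff
        break
    items.pop(worst)                                 # drop the inconsistent item

print("surviving items:", items)
```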

But this (minus all those sneer quotes) sounds like exactly the right procedure for developing a valid test for intelligence. We accept the patch and sputum tests for tuberculosis because their results agree with each other and with physicians’ “a priori” diagnoses of tuberculosis. Would Lewontin say of these tests:

The claim that something real is measured…is a classic case of reification. It is rather like claiming, as proof of the existence of God, that he is mentioned in all the books of the Bible.

If Lewontin had said that the psychologists and teachers involved couldn’t tell the difference between smart and stupid people in the first place, he would have a substantive (but unsubstantiated) criticism. But this would be the charge of bad judgment, not of “conceptual error.”

Lewontin seems to regard the “abstruse” statistical methods used in drawing up I.Q. tests as dangerously highfalutin, but says the real problem

…is not in the arithmetic, but in the supposition that, having gone through the mathematical process, one has produced a real object or at least a number that characterizes one.

But this talk about “real objects” is unhelpful. I.Q. tests give us numbers which correlate with and predict some people’s rankings of subjects by intelligence. Where is the mistake? Well…

As Gould points out, the price of gasoline is well correlated with the distance of the earth from Halley’s Comet, at least in recent years, but that does not mean that some numerical combination of the two values measures something real that is their common cause.

If Lewontin is admitting here that there is a correlation between I.Q. and intelligence, but suggesting that it’s only an accidental correlation, he is conceding a good deal. An intelligence test that works only by accident is still a test of intelligence that works. So long as the comet’s distance is reliably correlated with gasoline prices we can use one figure to determine the other. What makes this sound odd, of course, is the idea that the correlation between I.Q. and (psychologists’ assessments of) intelligence could be just a coincidence. That is unlikely and that is just why we suspect a common cause. In the same way the fact that psychics can’t agree on the color of people’s auras is evidence that there is no such thing as an “aura”; while the fact that independent oenologists tend to coincide in their rankings of vintages is evidence that their judgments have a real object.
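The point about using one figure to determine the other is just prediction from a correlate. A toy illustration, with fabricated numbers standing in for comet distances and pump prices:

```python
# If two series are reliably correlated, either can be used to predict the
# other, whatever the causal story. All numbers below are fabricated.
import numpy as np

rng = np.random.default_rng(1)
comet_distance = np.linspace(10.0, 4.0, 20)                  # invented units
gas_price = (1.0 + 0.15 * (10.0 - comet_distance)
             + rng.normal(0, 0.02, 20))

slope, intercept = np.polyfit(comet_distance, gas_price, 1)  # fit a line
print("correlation:", np.corrcoef(comet_distance, gas_price)[0, 1])
print("predicted price at distance 6.0:", slope * 6.0 + intercept)
```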

Mixed up in this are Lewontin’s cavils about the “historically and culturally contingent” nature of our judgments of intelligence. The thought here is, of course, that what gets counted as “intelligence” will vary from culture to culture and from time to time, in the way that, say, judgments of physical beauty vary historically and culturally. Whether judgments of intelligence are relative is an interesting empirical question, not one that can be settled by armchair theorizing or literary anecdotage. But supposing they are, so what? Lewontin seems to think that this shows that what the tests measure (and maybe intelligence itself) is “unreal.” Non sequitur, but then Lewontin isn’t the only one to think that as soon as “cultural relativity” rears its ugly head, science goes out the window. This is a mistake, as the following fable may help to show.

Suppose that we undertook to produce “Handsome Tests.” Funded by grants from the NSF and Elizabeth Taylor, we set about finding a set of physical measures which distinguish good looking men from the rest of us. We begin with a large sample of men and ask women to rank them by looks. We take careful measurements of the men’s physical attributes, apply “abstruse statistics” and come up with the h factor—a complex ratio of nose-length to shoe size—that tests out well in predicting who will be regarded as better looking than whom. Success in finding a reliable Handsome Quotient would depend upon and be evidence for there being some real feature in common among men judged to be handsome. Determining H.Q.’s would be useful in all sorts of ways, e.g., we could use it as a tool for resolving the nature-nurture question with respect to handsome (“tighter shoes will help but you’re stuck with that nose”). But now suppose we notice that the women who produced our original target ranking were all Americans and that their judgments are wildly at odds with those of Russian women. Obviously we can no longer claim that h measures handsome tout court; we’ll have to say that it tests for what counts as handsome among American women. But three things are worth noting here. First, despite the relativity we haven’t stopped measuring something real. The difference between being good looking and not—if only to Americans—is nothing to sneeze at; being good looking to the women around you confers a biological advantage. Second, the relativity doesn’t mean that our test is useless for a general science of man. The test would be an important first step in producing a general theory of the cultural determinants of such judgments. Third, and most important: however much verdicts on handsome vary from culture to culture, it can still be the case that these judgments turn on characteristics that are biologically determined. Though assessed differently by different cultures, physiognomy is the primary determinant of good looks and—plastic surgery aside—physiognomy is largely a matter of heredity. Even an “historically and culturally contingent” quality can be “shaped by our genes.”
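For concreteness, the fable’s procedure can be run end to end. In this sketch every quantity is invented: the measurements, the panel, and the nose-to-shoe ratio itself.

```python
# The fable's check, in miniature: invent measurements, invent a panel's
# ranking, and see whether the "h factor" (nose length over shoe size)
# predicts that ranking. Every quantity here is made up.
import numpy as np

rng = np.random.default_rng(2)
n_men = 100
nose_length = rng.normal(5.0, 1.0, n_men)       # invented units
shoe_size = rng.normal(10.0, 1.5, n_men)

# Pretend the panel's judgments track the ratio, plus some disagreement.
panel_score = nose_length / shoe_size + rng.normal(0, 0.05, n_men)
panel_rank = panel_score.argsort().argsort()    # rank of each man

h = nose_length / shoe_size                     # the candidate "h factor"
h_rank = h.argsort().argsort()

# Spearman rank correlation: Pearson r computed on the two rankings.
r = np.corrcoef(panel_rank, h_rank)[0, 1]
print(f"rank correlation between h and the panel's ranking: {r:.2f}")
```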

Terry Tomkow
Robert M. Martin

Dalhousie University, Halifax, Nova Scotia

To the Editors:

In his recent review of Stephen Jay Gould’s new book, Richard Lewontin makes the claim that present-day scientists are “imprisoned” by “the atomistic system of Cartesian explanation that characterizes all of our natural science.” I wish to point out that this statement represents an oversimplified view of a complex situation.

We are told that scientists are imprisoned by their reductionism and that reductionism has failed to solve many important problems in “natural science.” Indeed it has, but Lewontin should realize that failed attempts are not the same as failed research strategies. For years there were failed reductionist attempts at explaining the detailed structure of the atom, yet understanding was eventually achieved via reductionist means. Similarly, failed attempts at explaining the general patterns of animal development (such as Spemann’s organizers) do not necessarily invalidate the reductionist effort as such.

Classical reductionism is the belief that the properties of a system at one level are wholly explainable in terms of the properties of the components present at a “lower” level. The whole is merely the sum of its parts. Yet this mode of scientific analysis may be quite rare in its pure form. The physicist trying to determine the equilibrium behavior of gas molecules cares not a bit (at least now) for the “strangeness” of the subatomic particles within. Nor do I as a geneticist always attempt to explain genetic processes simply in terms of the “basic” components of the system. For example, in my work on the genetic basis of sex ratio variation in an insect, my collaborator and I have identified various genetic and environmental determinants of sex ratio differences. The careful reader will note that the previous statement contains a bit of reductionism: the distinction between “genotype” and “environment.” But one must start somewhere. Lewontin would have us believe that most scientists stop thinking after decomposing the system into parts. Not only is this not true (see below) but it is wrong to think that such a decomposition is incompatible with recognizing that a system is interactive in nature. My collaborator and I have recognized that certain environments change the expression of sex ratio genes in unique ways. We recognize a simple interaction. This sort of analysis is not uncommon, because decomposing the system may be the only way of determining why the whole is more than the sum of its parts. Modern natural science is more pluralistic in its methods than Lewontin’s statement would imply.
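A minimal numerical sketch of such an interaction may help; the sex ratios below are invented for illustration and are not taken from the study mentioned:

```python
# A toy genotype-by-environment interaction: the genotype that raises the
# proportion of female offspring in one environment lowers it in the other,
# so the table is not the sum of a gene effect and an environment effect.
# The sex ratios are invented for illustration.
import numpy as np

# Rows: genotypes A, B.  Columns: environments warm, cold.
sex_ratio = np.array([[0.70, 0.40],
                      [0.45, 0.65]])

grand = sex_ratio.mean()
geno = sex_ratio.mean(axis=1) - grand            # genotype main effects
env = sex_ratio.mean(axis=0) - grand             # environment main effects
additive = grand + geno[:, None] + env[None, :]  # what pure addition predicts
interaction = sex_ratio - additive               # what addition misses

print("interaction term:")
print(interaction)
```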

Of course, it might be asserted that the type of scientific research strategy described above has not been used in the past when great scientific (say, biological) discoveries were made. Were Avery, Beadle, Darwin, Morgan, Pasteur, or Wright imprisoned by their reductionism? A partial answer to this question can be found in a recent and fascinating scientific biography of Thomas Hunt Morgan. Morgan is the person who provided, along with his coworkers, much of the foundation of modern genetics. Surely he, of all people, would be a reductionist. Yet Garland Allen, the author of this biography, argues that Morgan[1]

…recognized the importance of studying complex processes initially by breaking them down into their component parts, but he did not believe that every biological problem could find its only satisfactory explanation in purely physical or chemical terms. Physics and chemistry were helpful in understanding biological problems, but an organism was something more than a “bag of molecules.”

Allen even explicitly characterizes Morgan as a “dialectical materialist.” This is an arguable point, but it is clear that Morgan was quite able to make great discoveries using his approach to the study of biological phenomena. His approach is common.

Of course, tremendous effort will be required to solve the many important biological problems remaining. A few new facts will not allow us to “understand” the brain, for example. Nevertheless, there is no reason to believe that the varied methods of modern natural science will not allow us to eventually achieve such an understanding.

It is then a straw man that Lewontin creates when he writes of a reductionism which imprisons.[2] Indeed, his “Cartesianism” is a model of scientific inquiry of which it has been said there are no cases.[3]

Steven Orzack

Museum of Comparative Zoology

Harvard University, Cambridge, Mass.

R.C. Lewontin replies:

I take it as a severe criticism of my ability to write that two professors of philosophy can have so misread my explication of Gould’s book. I will try to make amends by explaining the matter again.

The height of a person is a natural attribute of a real object. If I average the heights of ten people, that average is not an attribute of any real object. There is no person with such a height, nor does it characterize the height of the collection of individuals since a collection of people does not have a height. The average is not even a height. It is simply the sum of a lot of measurements divided by the number of measurements. It is a mental construction. To assert that it is a real attribute of a real thing is an act of reification (indeed, double reification!). Again, if I multiply a person’s height by the number of letters in her name and divide that by the zip code of her residence, I will get an index that may do quite a good job of picking her out from a crowd. It is not, however, the characterization of a physical attribute, and to claim that it was would be an act of reification. Finally, I may ask a lot of children questions about language, geometrical patterns, numbers, and social attitudes; then construct the matrix of population correlations among these different sets of questions, rotate the axes of the matrix, and find its principal eigenvector, call the vector “g,” project an individual’s scores onto the principal eigenvector, and come up with a single characterization of that individual.
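Stripped to its bones, that last sequence of operations looks like this (a sketch with fabricated scores; nothing below comes from any real test):

```python
# A bare-bones version of the calculation described above: correlate the
# scores, take the principal eigenvector of the correlation matrix, call it
# "g," and project each child onto it. The scores are fabricated.
import numpy as np

rng = np.random.default_rng(3)
n_children, n_domains = 300, 4   # e.g. language, patterns, numbers, attitudes

shared = rng.normal(size=(n_children, 1))         # a common factor
scores = 0.7 * shared + rng.normal(size=(n_children, n_domains))

corr = np.corrcoef(scores, rowvar=False)          # correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)           # symmetric, so use eigh
g = eigvecs[:, eigvals.argmax()]                  # principal eigenvector

# Standardize each column, then project: one number per child.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
g_score = z @ g
print("first five g scores:", np.round(g_score[:5], 2))
```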

That abstruse calculation may do a moderately successful job of picking out children whom people believe to be “intelligent” (or handsome) and may even be of a little value in predicting who will make more money as an adult (parents’ income and occupation are much better predictors), but that does not make g a natural attribute. To claim that g is a natural attribute is to reify a mental construct. When Spearman and Burt went from constructing the value g to asserting that it measured a physical property, intelligence, that could be “fluid” or “crystallized” and that was a form of energy, they were not making a factual error, but a conceptual one. As to Valhalla, I would rather not get embroiled in the historical issue of whether it was a false hypothesis or a pure mental construct. If we read “New Jerusalem” for “Valhalla,” the point is made without ambiguity.

The second misunderstanding concerns the correlation between IQ scores and so-called “intelligence.” Certainly IQ scores accord with a priori judgments about who is intelligent. That is because the Stanford-Binet test was cut and fit until it picked out such people. The issue that Gould and I were addressing was whether the agreement in the results of different IQ tests and parts of IQ tests could be taken as evidence that they were measuring something real. That agreement is not evidence because tests are not independent of each other but are adjusted to agree with the Stanford-Binet. Tomkow and Martin have thoroughly muddled “intelligence” with notions about intelligence. IQ tests do pick out people whom teachers and psychologists think are intelligent. Unfortunately, that fact has confused even our philosophers into thinking that the tests pick out people who have a physical, heritable, internal property, “intelligence,” that stands apart from socially determined mental constructs. That confusion is enshrined in E.G. Boring’s famous definition of intelligence as what IQ tests measure. The Catholic Church has a very elaborate, exacting, and successful test procedure, including the attestation of miracles, for finding out people whom its members regard as being “saintly.” But saintliness remains a mental construct, just like intelligence. It is not simply our “judgments of intelligence” but the very idea of intelligence that is a historically contingent mental construct.

It is important to point out that the distinction between mental constructs and natural attributes is more than a philosophical quibble, even when those constructs are based on physical measurements. Averages are not inherited; they are not subject to natural selection; they are not physical causes of any events. There are no “genes for handsomeness” or “genes for intelligence” any more than there are “genes for saintliness.” To assert that there are such genes is a conceptual, not a factual, error and one that has major consequences for scientific practice and social analysis.

Orzack’s point, about the failure of Cartesian analysis, comes down to a difference of opinion. It is, of course, dangerous to claim that the brain and the embryo will never be understood using our present concepts. It is the great irony of molecular biology that, inspired by Schrödinger’s “What Is Life?,” it began with the belief that the ordinary laws of physics would not suffice to explain biological phenomena, and ended up with a description of basic hereditary processes that looks for all the world like a Ford assembly plant. Nevertheless, it is very unlikely that we are waiting for just a few new facts or experimental techniques to crack the problem of the central nervous system. Questions about the brain combine direct physical properties with metaphysical constructs that we cannot seem to avoid. It is a very different thing to ask “What are genes made of?” than to ask “What is the anatomical and molecular basis of thinking?” The first is well within the framework of Cartesian analysis while the second has that nasty word “thinking” in it. The problem is to bridge the gap between substance and thought, to do in conscious language what our brains do by their very nature.

This Issue

February 4, 1982