In response to:
ESP at Random from the July 14, 1977 issue
To the Editors:
Martin Gardner, writing in The New York Review of July 14, represents me as a severe critic of Harold Puthoff and Russell Targ’s research on teaching ESP by electronic feedback machines and, while apparently esteeming me more highly than Puthoff and Targ, ends up concluding that “until Tart repeats his tests under controlled conditions—adequate randomizing and rigid exclusion of all possible methods (there are others!) of secret coding between subject and sender—the staggering results reported in his book cannot be taken seriously, even by other parapsychologists.” I disagree on three points, and with his conclusion.
First, my review (in my Learning to Use Extrasensory Perception, University of Chicago Press, 1976) of Puthoff and Targ’s studies was generally positive. My strongest criticism concerned the possibility of (not evidence for) subject fraud in only one of their case studies, in which just one subject was used, but I pointed out that their data did not indicate that a defect in the Aquarius four-choice testing machine had been exploited to inflate the scores. The Aquarius machine was not used in the mode that allowed that defect to operate in any of their other studies. Gardner also fails to report that half of my own research used the Aquarius machine: fifteen subjects made an overall score of 2,006 hits when 1,869 would be expected by chance. This would occur by chance alone only four times in 10,000, indicating good ESP results with that machine under tightly controlled conditions.
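[Editor’s note: Tart’s chance figure can be checked with a quick back-of-the-envelope calculation. The Python sketch below uses a normal approximation to the binomial; the total trial count is not stated in the letter and is inferred here from the chance expectation on a four-choice machine, and Tart’s quoted odds of four in 10,000 may derive from a different test (e.g., exact binomial or two-tailed).]

```python
from statistics import NormalDist

# Figures from Tart's letter: 2,006 hits where 1,869 were expected by
# chance on a four-choice machine (hit probability 1/4, fifteen subjects).
p_hit = 0.25
expected_hits = 1869
observed_hits = 2006

# Trial count inferred from the stated chance expectation (an assumption,
# not a figure given in the letter).
n_trials = int(expected_hits / p_hit)  # 7,476 trials

# Normal approximation to the binomial: how unlikely is a surplus of
# 137 hits over the chance expectation?
mean = n_trials * p_hit
sd = (n_trials * p_hit * (1 - p_hit)) ** 0.5
z = (observed_hits - mean) / sd
p_one_tailed = 1 - NormalDist().cdf(z)

print(f"trials={n_trials}, z={z:.2f}, one-tailed p={p_one_tailed:.6f}")
```

On these assumptions the surplus is more than three standard deviations above chance, comfortably in the small-fractions-of-a-percent range Tart describes.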
Second, I believe the letter from my mathematician colleagues, Professors Goldman, Stein, and Weiner, which was written at an intermediate stage of our fruitful collaboration, was somewhat premature and wrongly gives the impression that my results can be readily “explained away.” This is not the place for a long, technical discussion, but the basic concern is whether some of the targets could have been predicted by the subjects by using mathematical inference and knowledge of previous targets, rather than ESP. A good card player can make better-than-chance guesses at what cards are still out by keeping track of what’s been played, and that is the kind of question we were investigating. A very powerful computer test of this possibility has now been completed by another colleague, Eugene Dronek of the University of California at Berkeley, and me, and we have found that such mathematical prediction cannot account for the bulk of my results: even if the subjects were trying to predict this way, there is still an enormous amount of ESP. To update the “dirty test tube” analogy, the effects of the contaminants have been estimated and found to be minor.
Third, Gardner misrepresents me in saying, “Tart also recognizes in his book (page 164) a clever method by which sender and receiver could have cheated…” (my italics). I pointed out how a sender could have unintentionally and unknowingly cued a subject, but I found no evidence that this happened: I have no evidence that my experimenters or subjects cheated. Gardner takes a position that I find morally repellent as well as scientifically invalid, namely that if a critic can think of any way a subject and/or an experimenter could have cheated to get apparent ESP results, regardless of whether there is any evidence of cheating, then the experimental results need not be taken seriously.
In science, an explanation must be capable of disproof as well as proof. There is no experiment, however, in any field of science, that cannot be faked. Gardner’s criticism demonstrates that he does not accept the discipline of scientific method. While he is certainly entitled to defend his personal belief system by any means he wishes, I hope The New York Review readers will not mistake it for science. The possibility of reliably training people to use ESP that my work raises is too important a question to be dealt with by implication and misrepresentation. The appropriate scientific response is for other researchers to carry out similar work which may confirm, disconfirm, or modify my findings.
Charles T. Tart
University of California
Martin Gardner replies:
I will comment on each of the three points:
1) Tart’s criticisms of the ESP teaching machine experiments by Puthoff and Targ (pages 26-31 of Tart’s book) are, in my judgment, “severe.” The first subject of the pilot study was a child, the second a scientist. The child showed a mild increase in ESP ability, the scientist a remarkable increase. Tart adds: “Unfortunately, this subject, a scientist, recorded his own data, and the first subject’s data were reported by his father. Since it is a general rule in parapsychological research never to allow subjects any opportunity to make recording errors or to cheat, these results must be considered tentative” (p. 27, Tart’s italics). Tart is politely accusing P and T of violating kindergarten canons of experimental design.
Tart then summarizes the three phases of the NASA-funded experiment that followed the pilot study. For phase I, “The total number of hits for the group as a whole was almost exactly what one would expect by chance.” However, one subject scored high. (He was Duane Elgin, a self-proclaimed psychic and friend of P and T who was then a “futurologist” at Stanford Research Institute.) Tart accepts this as genuine ESP, balanced by “significant ESP-missing” (unusually low scores) on the part of others. P and T were convinced that the overall poor results were caused by the clatter of their machine’s data printer.
For phase II, P and T chucked the printer and for the first time in their experiment all scores were automatically recorded by a silent computer. As I have pointed out elsewhere, this eliminated possible sources of bias in phase I. The results showed no deviation from chance either in number of hits or learning curve slopes. In brief, the only adequately controlled phase of the experiment showed no sign of ESP.
For phase III, P and T relaxed controls, detached the computer, and went back to primitive hand recording. Of the eight subjects, only Elgin redeemed himself. Tart summarizes the entire project as follows: “Most of their subjects showed no ESP, and of those who did, few were able to hold up in further studies.”
P and T used a four-choice trainer called the Aquarius Model 100. Tart used the same machine in his early studies. However, he reports that his son discovered a way to cheat on the machine when it was in its precognitive mode (the subject guessing targets before they are selected), and during one of Tart’s experiments the machine “broke down and began repeating one target with a very high frequency.” Although Tart reports positive results with this machine, he completed his work with a ten-choice trainer of his own invention, which he clearly considers a vast improvement over the Aquarius model. It was with his own machine that Tart obtained the most sensational ESP results ever reported by a parapsychologist.
2) I completely agree with Tart that the defect in his machine’s randomizer is insufficient to account for his results of “A million billion billions to one” against chance. Nowhere have I suggested otherwise.
3) Tart accuses me of failing to understand scientific method. Because any result can be faked, he says, there is no reason to dismiss a psi experiment merely because a loophole in the design allowed cheating. Unless one can prove that a subject cheated, Tart finds it “morally repellent” to criticize the results.
When I read those statements I could hardly believe my eyes. Nowhere in his letter does Tart indicate an awareness of the enormous qualitative difference between testing psychics for paranormal powers, and experimentation in all other branches of science. Blood cells, DNA molecules, gerbils, and photons don’t cheat. Because of the long, sorry record (going back to ancient times) of constant cheating by self-anointed psychics, the very essence of sound experimental design in parapsychology is to close all cheating loopholes. Until they are closed, no experiment indicating sensational psi powers is worth publishing. Tart himself (in the passage quoted in my first main paragraph) takes P and T to task on just such grounds.
When subjects are objects or creatures that can’t cheat, the only possibility of fraud is on the part of an experimenter. This does sometimes occur in all branches of science, usually with disastrous results. We have recently had several sad instances: the faking of mice specimens by a respected doctor at Sloan-Kettering, for example, and the scandal involving the director of J.B. Rhine’s laboratory who was caught altering records. Such cases are uncommon. But cheating by self-styled psychics is not uncommon. That is why extraordinary safeguards are required in psi research that are not required in other fields.
Let me adopt Tart’s technique and repeat the sentence he quotes from me, but with a different word italicized: “Tart also recognizes in his book (page 164) a clever method by which sender and receiver could have cheated….” It was this method that I described because it is one that not many parapsychologists know. I have no idea whether it was used, or whether an isolated sender—in his enthusiasm for telepathically guiding the subject’s hand to the target card—jumped up and down, thus transmitting a floor vibration to the subject across the hall who could use it, consciously or otherwise, as a cue.
Whether such methods (there are still others!) were used or not is beside the point. The point is that Tart’s experimental design, because it permitted such easy ways to cheat, was incredibly poor—so poor, in fact, that it was premature for Tart to write a book about it, and uncharacteristically bad judgment by the University of Chicago Press to publish it.
It would be helpful now if Tart would disclose more of his raw data. For instance, given a subject who made exceptional scores, were those scores obtained when the same person acted as sender? If so, the talented pair should be tested by better-controlled replications in someone else’s laboratory. Were videotapes made of any of the high-scoring runs? If so, a careful study of the tapes would confirm or deny the time-delay code I described. If no videotapes were made, that is another design defect, because they would have provided invaluable data. It also would be helpful if in all further testing Tart hired a knowledgeable and skeptical magician to observe the actual experiments.
Tart’s reply to my note betrays a whopping misconception about the nature of the controls that are mandatory in the testing of alleged psi powers. Nevertheless, Tart’s understanding of experimental design impresses me as a cut above that of most of his colleagues.