Major steps in scientific progress are sometimes followed closely by outbursts of foolishness. New discoveries have a way of exciting the imagination of the well-meaning and misguided, who see theoretical potentialities in new knowledge that may prove impossible to attain. On occasion, the seemingly imminent is later shown to be much further off than originally thought, yet still possible to achieve. More frequently, the apparent prospect is revealed to be the result of unrealistic hypotheses based more on wishful thinking than on fact. In no branch of human thought have erroneous leaps of this kind been more prevalent than in that peculiar mix of science and art that goes by the name of medicine.

Aesculapius, the ancient Greek god of healing, named one of his daughters Panacea, meaning “remedy for all diseases,” as if in recognition of man’s eternal quest for a universal method that would cure all of his afflictions. Though the search for such cures was long ago recognized as being futile, there have always been zealous advocates who promote particular medicines or regimens claimed to be so broadly effective that they approach the old concept of a cure-all. In the Middle Ages and Renaissance, for example, the preferred means for combating the wide variety of the poisonings so prevalent in those times was a prescription called theriaca, a mixture of some sixty primarily botanic ingredients that came to be applied in many situations where the blood was thought to be contaminated. Nowadays we look to the promise of stem cells and genetic engineering to provide the kinds of universal cures that have always been sought.

Psychiatry—if we can call it a discipline—is particularly susceptible to such fantasies. The mechanisms of mental illness are poorly understood and the modes of treatment are uncertain, both in their predictability and in their scientific basis. Two recently published books deal with the propensity to seek methods of cure that are allegedly applicable to a spectrum of psychiatric problems that may in fact be unrelated to one another, whether in their causes or in the details of their symptoms. These books, by Andrew Scull and Jack El-Hai, deal also with two particularly dangerous and foolish attempts to cure mental illness. That those responsible for them were highly motivated, highly skilled, and high-minded in their intentions, at least at the beginning of their work, did not in any way make their theories or themselves less foolish or dangerous.

In both cases, initial enthusiasm and seeming success gave way to obsessive determination to persist long after their proposed panacea had been proven to be far less promising than supposed, and even fraught with perilous consequences. And in both cases, a kind of madness supervened, leading to either professional or personal ruin and devastating effects on the lives of thousands of the patients they were so fiercely determined to rescue. Instead of the scientific immortality they pursued and expected, Henry Cotton and Walter Freeman are today remembered as frightening figures in the sometimes bizarre history of attempts to treat mental illness.


Scull’s book, Madhouse: A Tragic Tale of Megalomania and Modern Medicine, has to do with an old concept brought back to life by a new discovery. When, after the convincing work of Pasteur, Lister, Koch, and others, the germ theory was finally accepted by the international medical community in the late nineteenth century, the original skepticism that greeted its appearance was replaced by an unchecked enthusiasm for its implications. In time, it came to be recognized that not only were the germs themselves responsible for so many of the diseases whose etiologies were until then unknown, but that some of them were capable of producing toxins that were equally noxious. Since at least the time of the Egyptians, who believed that the putrefied effluvia of feces—their fumes and odors—were absorbed into the body, causing all manner of symptoms and overt sicknesses, the notion of internal contamination had been one of the staples of some schools of medical thinking. With the turn-of-the-twentieth-century acceptance that bacteria and their toxic products could cause so much havoc in the body, that ancient notion took on a legitimacy never previously associated with it.

These concepts came into medicine at a time when biological determinism was the prevailing theory of insanity. The mad were thought by most institutional psychiatrists to be hereditarily defective or to have undergone some kind of irreversible mental regression. Dr. Henry Cotton, the subject of Andrew Scull’s book, challenged these ideas. Trained at Johns Hopkins University by some of America’s foremost academic psychiatrists, he had spent a year in Munich during 1906 studying brain pathology under such luminaries as Emil Kraepelin and Alois Alzheimer. He then came under the tutelage of Adolph Meyer, later to be the chairman of the department of psychiatry at Hopkins and a noted researcher on mental disease, many of whose pupils went on to distinguished careers. Meyer favored the collection of detailed observations and the accumulation of large amounts of data as the key to understanding mental sickness; he hoped by such methods to put its study on a scientific basis and elucidate its biological causation.


Like all of Meyer’s disciples, Cotton considered himself part of the new wave of his specialty, a scientist who would bring the treatment of the insane closer to the treatment of general diseases by seeking remediable organic causes in the laboratory. Cotton and his fellow young activists rejected the Freud-based concepts of vaguely comprehended psychodynamic processes; even more unacceptable to them was the resigned attitude of so many of their elders, who believed in heredity and the custodial care of the insane.

After a rapid rise in the hierarchy of institutional psychiatry, the thirty-one-year-old Cotton was named superintendent of the New Jersey State Hospital in Trenton in 1907. In keeping with his thesis that mental illness was responsive to appropriate treatment, he soon did away with every restraining device with which inmates had traditionally been shackled. If modern medicine was to be introduced into the treatment of the insane, it would begin with humane measures. His tenure began on a high note.

Though less than twenty years had passed since the germ theory was reluctantly accepted by American physicians, they were beginning to embrace it enthusiastically in the first decades of the twentieth century, eagerly seeking explanations for one symptom or disease after another by finding sites at which microorganisms were clustered, or from which they migrated elsewhere. In Great Britain too, bacteria were being indicted to explain a multitude of problems, most particularly a condition called autointoxication, said to result in difficulties as various as headaches, rheumatoid arthritis, and diabetes. One of England’s leading surgeons, William Arbuthnot Lane, was made a baronet in 1916 for his introduction of the largely imaginary anatomical bowel abnormality called Lane’s kinks, requiring operative release or excision of long segments of the colon because of stalled stools and the alleged consequent absorption of impurities into the bloodstream. This was the era in which the high colonic enema became as popular in Britain and America as jogging is today. It followed naturally that the putative effects of autointoxication would be blamed for disorders of mood.

It was as though medicine at this time was on a blissful honeymoon with the germ theory, sometimes indulging in fanciful notions discarded centuries earlier. The time for foolishness had come. Gradually, Cotton became intrigued by the writings of colleagues in America and England who asserted—with no evidence—that collections of germs and pus were responsible not only for ordinary neurotic behavior but also for major madness. The seemingly scientific notion of “focal sepsis” gave Cotton an excellent opportunity to use physical examination, laboratory studies, and X-rays in the same manner that general physicians did in deciding on therapy. Before long, he had convinced the New Jersey authorities to build extensive diagnostic facilities and operating rooms at his institution, in keeping with his master plan, which was to find and extirpate every discoverable locus of infection in each of the many psychiatric patients for whose care he was responsible.

He began his crusade with tooth extractions and then proceeded to tonsillectomies, appendectomies, and cutting out parts of the uterine cervix in women. Finally, taking a page from Arbuthnot Lane’s book, he began to excise ever greater lengths of large intestine, arguing that the bowel demonstrably swarms with bacteria. Cotton was undeterred by the post-operative deaths of about one third of his patients because he felt this rate of mortality was justified by cure rates, which he claimed to be in the range of 85 percent. Though more than a few other psychiatrists supported his theories and some acted on them, Cotton was by far the most vigorous and outspoken in publicizing statistics that, he insisted, validated his methods.

Cotton’s activities became the subject of considerable curiosity and then scrutiny, and in time the hospital trustees asked Meyer, then considered one of America’s leading psychiatric authorities, to look into them. He appointed a young psychiatrist whom he had trained, Dr. Phyllis Greenacre, to subject Cotton’s methods and results to a scrupulous study, which revealed not only that the treatment was useless but that the procedures used to carry it out had resulted in the maiming, disfigurement, and even death of a considerable number of people, some of whom had been operated upon against their will. Though Meyer suppressed the study, probably with the intention of protecting a man for whom he felt professional and personal responsibility (and who, in the presence of mounting criticism, was by then showing evidence of mental breakdown himself), the outcome was the downfall of Cotton. He died at fifty-six of a coronary attack, after he had lost his professional reputation.


Scull tells Cotton’s story skillfully, with considerable suspense and emphasis on physical appearance and personality. In youth, Cotton is “jaunty-looking, dark-haired”; later, as his troubles accumulate, he degenerates into “a heavy-set, white-haired, middle-aged man.” His antagonist, Phyllis Greenacre, is tall and attractive, though so beset with her own insecurities that she becomes “vulnerable and frightened” during her training period, “wrestling with her own psychological demons and steadily losing the battle.” The man she marries and later divorces is a brilliant researcher, and a “handsome former high-school athlete from Colorado.” The illustrious Meyer is “a forbidding figure, an authoritarian Swiss German with rigid European ideas of hierarchy, obsessed with detail.”

Despite his modest stature, even his appearance was intimidating: his piercing eyes, his sharp features, his pointed black beard, his Old-World mannerisms, and the Teutonic accent.

A reader cannot be blamed for wondering why Scull finds it necessary to embellish what is in fact a fine piece of scholarship with language that can be more than a trifle annoying and a distraction from his narrative. A book whose first paragraph contains a series of forced alliterations has a lot to make up for after it subjects the reader to such phrases as “mad and mopish”; “the distracted and the deranged”; “pity and punishment”; “fear and fascination”; “ranting and raving”; “distress and disturbance”; and “menace and misery.” Still, when Scull settles down to tell his story, he puts such devices aside and turns to writing a fascinating and well-documented history.

But it is history just a bit out of context. Though Scull limits himself to telling Cotton’s sad story, the fact is that far wider tendencies than his own presumptions combined to bring about his downfall. Even as he was starting his notorious campaign against “focal sepsis” at the New Jersey State Hospital, that conception of disease had already begun to crumble. During the following two decades evidence against autointoxication and its consequences piled up in studies that were radiologic, bacterial, biochemical, clinical, and microscopic. As vigorous as Cotton was in championing the theoretical basis of his radical therapies, there had never been any sustainable evidence of its validity; it was discredited by the very laboratory studies on which he had claimed it was based. By the time of Cotton’s death in 1933, the theory had been successfully debunked, and the scientific world went back to viewing the notion of autointoxication as just so much foolishness.


Foolishness, hubris, and the search for a panacea are also the subject of Jack El-Hai’s The Lobotomist, whose subject is described correctly in its subtitle: A Maverick Medical Genius and His Tragic Quest to Rid the World of Mental Illness. Cotton’s professional qualifications were high, but those of Walter Freeman were remarkable. Whether measured by training, academic attainments, national stature, or clinical ability, Freeman’s prominence was such that he stood in the first rank not only of his specialty of neurology but of the entire medical profession in America and the world. But there was in his character an egotism, an indifference to the opinions of others, and a willingness to skirt ethical boundaries. These qualities turned his eagerness to be a benefactor of mankind into a sort of obsessive mission that went far beyond the limits of reason.

Freeman’s foolishness was far more dangerous than Cotton’s in that it influenced an entire generation of psychiatrists, neurologists, and neurosurgeons and devastated the lives of tens of thousands of patients and their families. Instead of the important work he once seemed capable of doing, he is today remembered precisely as El-Hai describes him: “Aside from the Nazi doctor Josef Mengele, Walter Freeman ranks as the most scorned physician of the twentieth century.” He was, after all, the man whose name is associated with the prefrontal lobotomy.

Freeman’s obsession with the possibilities inherent in lobotomy began with the work of several neurophysiologists in the early 1930s whose experiments indicated that the brain’s frontal lobes were the source of human emotional states. Based on this laboratory evidence, the Portuguese neurologist Egas Moniz hypothesized that disrupting their neural connections to other parts of the brain might relieve the obsessive and harmful behavior that characterizes so much mental illness. On March 3, 1936, Moniz operated on a hospitalized patient by cutting small holes in the top of his skull directly over the frontal lobes, and then inserting a specially designed instrument by which he removed thin slices of the brain tissue between those areas and the rest of the organ. Four months later, he presented a paper before the Academy of Medicine in Paris, claiming that 35 percent of the twenty patients thus far subjected to the operation had been cured and another 35 percent helped. It was in this paper that he introduced the word “psychosurgery.”

Having been deeply impressed by Moniz when they met at a previous international congress, Freeman soon began a correspondence with the older man, in which he said he would recommend an American trial of the new procedure. By this time, Freeman had established his reputation as one of the leaders of American neurology. In addition to his medical degree, he had earned a Ph.D. in the subject, was professor and chairman of the neurology department at the medical school of George Washington University, a founder of the American Board of Psychiatry and Neurology, and the author of a leading textbook in a field in which he was an acknowledged expert, Neuropathology: The Anatomic Foundation of Nervous Disease. At the age of forty, he was at the top of his profession.

Like Henry Cotton, Freeman was a convinced organicist, certain that only a biological approach would serve any purpose in elucidating and treating mental disease. He had no use for psychological explanations or for the forms of psychological therapy then used. Furthermore, he was at a point in his career where, in spite of great academic success, he was feeling increasingly frustrated with his inability to leave some lasting mark on his field. The prefrontal lobotomy would enable him to take on both challenges at once: to make a unique contribution to neuropsychiatry and to secure himself a permanent place among the great contributors to medical science. “Here was something tangible,” he would later write, “something that an organicist like myself could understand and appreciate. A vision of the future unfolded.”

Enlisting the aid of James Watts, a talented young neurosurgeon who would be his associate in research and the further development of the procedure, Freeman planned his first operation and carried it out in September 1936 in the operating room of the George Washington University Hospital. Through holes drilled in the skull over the two frontal lobes, Freeman and Watts inserted the instrument that Moniz called a leucotome and cut out six cores of tissue on each side, severing connections to other parts of the brain.

The patient, a sixty-three-year-old woman with the diagnosis of severe agitated depression, made a good recovery and showed sufficient early improvement for the two innovators to publish a case report less than two months after the operation. By the time she died five years later with most of her symptoms still relieved, hundreds of other patients had undergone either lobotomy or similar operations. Over the next four decades, between 40,000 and 50,000 patients in America alone would undergo the procedure or one of its variations, with nearly 3,500 having been performed either by Freeman and Watts together or by Freeman alone. The roster of North American lobotomists and their academic connections reads like the list of an all-star team of neuroscientists, including such prominent experts as Walter Dandy of Johns Hopkins, W. Jason Mixter of Harvard, Lawrence Pool of Columbia, and Wilder Penfield of McGill.

El-Hai’s description of the events following the first lobotomy is, like Scull’s narrative of Cotton’s career, written with such clarity and engaging detail that a reader has difficulty in putting it down. One after another modification of the operating technique was introduced by Freeman, even after Watts had broken away from him, disillusioned both by the outcomes of some of the operations and by Freeman’s headlong pursuit of fame and disregard for ethical standards. Eventually Freeman developed the technique of passing a modified ice pick under the eyelid and pushing it upward through the thin bony plate over the socket, then sweeping it across the brain tissue in order to sever the connections between the frontal lobes and the thalamus, the structure he believed to be sending inappropriate signals to the frontal lobes. Such a procedure was not only quick but could be done in a physician’s office or an outpatient clinic.

Freeman disregarded mounting evidence that the operation was not as successful as originally thought, and furthermore was leaving many patients docile, unmotivated, and indifferent to their surroundings (as the Swedish neurosurgeon Gösta Rylander put it in 1947, they “suffered the amputation of their souls”), if not even more debilitated. He became not only its prime advocate but its prime publicist as well. From having, in the words of El-Hai, “promoted lobotomy only as a treatment of last resort,” he went on to declare, “Lobotomy, instead of being the last resort in therapy, is often the starting point in effective therapy.” And he claimed this even though he knew, after a decade of proselytizing, that lobotomies benefited only about a third of patients and left many others worse than they had been before the surgery.

Ego and self-delusion were only part of the explanation for Freeman’s folly. Along with his zealous pursuit of lasting fame he had a genuine desire to do good. El-Hai convincingly describes his mixture of motives in a brief passage:

Freeman coveted the role as the bearer of a new, safe, and more widely available psychosurgical procedure. He wanted to be the primary instrument of change, the person who brought the surgical treatment of psychiatric illness to the masses of people in such desperate need.

In the summer of 1946, barely seven months after introducing his ice pick surgery, Freeman embarked on what amounted to a years-long tour of mental hospitals, lobotomizing one patient after another with his ice pick while seeming to ignore his own recently stated view that he could benefit only a third of his patients. “Transorbital lobotomy is a simple, effective method of treatment,” he wrote. “It offers the hope of returning a relatively high percentage of ‘incurable’ psychotics to their communities.” The chapter in which El-Hai describes Freeman’s travels from institution to institution is appropriately entitled “Road Warrior.”

In time, lobotomy was discredited by its own increasing reputation as a procedure of limited therapeutic value with much potential to reduce its beneficiaries to a zombie-like state. At the same time its chief advocate was increasingly seen as a wild-eyed zealot wielding his ice pick on headhunting expeditions to state hospitals, even as his academic activities and affiliations decreased in number and quality.

But the decisive reason for the operation’s decline was made clear in March 1954, when the antipsychotic drug Thorazine was approved by the FDA, the first of many such agents that were to follow in the coming years. There seemed fewer and fewer occasions to consider lobotomy when effective psychopharmacology was available. Neurosurgeons have continued their attempts to develop far better forms of psychosurgery, using new understanding of brain science and recently introduced methods that locate particular brain functions; almost three hundred such operations of various kinds are now done worldwide yearly. But the day of Freeman’s lobotomy is long since over, and his reputation has only worsened in the years since his death in 1972. His ignominy is such that his name is not to be found among the thousands listed in the twenty-three-volume American National Biography compiled by the American Council of Learned Societies. The biography of Henry Cotton, a far lesser man, occupies a full page. It is written by Andrew Scull.

But Freeman was never the monster he is nowadays reputed to have been, and he was certainly no Mengele. Ego and ambition blinded him to reality, and hubris destroyed him along with his operation. He shared the folly of the panacea-seekers who convince themselves of their superior understanding of new knowledge and conclude that their innovations are unassailable. His career, like Cotton’s before him, can serve as a cautionary tale about those who come to believe that they are beyond the restraints of judicious professional behavior. As Molière put it in his Les Femmes savantes, “A learned fool is more foolish than an ignorant one.”


August 11, 2005