Medical Research on Humans: Making It Ethical

FPG/Hulton Archive/Getty Images
Tony, a seven-year-old boy with artificial limbs, July 1963. He was born without arms as a result of the drug thalidomide.

In the first part of this review, I discussed principles and codes of ethics concerning human experimentation, including the Nuremberg Code and the Declaration of Helsinki.1 But principles and codes are not the same as laws and regulations, even though they might inspire them. The first US statute dealing with the ethics of medical research on human subjects was enacted in 1962, as a reaction to the thalidomide tragedy of the late 1950s, in which pregnant women given thalidomide to alleviate morning sickness gave birth to infants with deformed or missing limbs.

Although the drug had not been approved for general use in the US, it was given as an experimental drug to many American women, and they were not told they were research subjects. The 1962 law mandated that human subjects be informed about the research and give consent. The National Institutes of Health also began to require that institutions that received NIH funding set up committees to review research on human subjects. In 1966, Henry Beecher, an anesthesiologist at the Massachusetts General Hospital, published an exposé of unethical studies that had appeared in medical journals.2 But there was not much general attention to the subject until the 1972 public revelation of the “Tuskegee Study of Untreated Syphilis in the Negro Male.” When this research made front-page news in The Washington Star and The New York Times, it had been ongoing for forty years—straddling the Nazi era.

The Tuskegee study was launched by the US Public Health Service (the parent body of the NIH) in 1932. In it, 399 poor African-American men with untreated syphilis were observed and compared with 201 men free of the disease to determine the natural history of syphilis. At the time the study began, syphilis was a major scourge. The only treatments were heavy metals, like arsenic or mercury, which were toxic and not very effective. The idea was to observe latent untreated syphilis, since there was a suspicion that men in the later stages of the disease might actually fare better without treatment. The men were not informed about their disease or the purpose of the study. They were told only that they would receive free examinations and medical care.

The lack of informed consent was not unusual in those days. (There was no informed consent in the 1948 streptomycin trial, discussed in Part 1 of this review, either.) What was worse was the fact that the study continued even after penicillin was found to be effective against syphilis in the 1940s. In fact, during World War II, these men were exempted from the draft to prevent their being treated with penicillin while in the military. Later, researchers justified continuing the study by saying there would…
