Until the 1990s American medical researchers performed most of their experiments on other Americans—frequently choosing subjects who were poor and vulnerable.1 Now, however, they are increasingly likely to conduct their investigations in third world countries on subjects who are even poorer and more vulnerable. Part of the reason is AIDS—the first modern infectious disease to strike the developed and developing world simultaneously and to give both a large stake in finding a cure. Part of the reason, too, is the mounting financial and regulatory burdens of research in the rich nations, which cause investigators, both from universities and drug companies, to go to the poorer countries to test new treatments.
Whatever the reason, practice has overwhelmed ethics. The major international codes on human experimentation, including the principles proclaimed at Nuremberg in 1947 and the World Medical Association’s Declaration of Helsinki in 1964, all say that the well-being of the subject always should take precedence over the needs of science or the interests of society, and that doctors must obtain “the subject’s freely informed consent.” But neither these codes nor the Western groups concerned with medical ethics have had the developing countries in mind. Countries in which clinical trials are now conducted are often too poor to pay for the medicines that are successfully tested. And the people recruited for those trials very seldom get the kind of medical care the participants in trials in prosperous countries can expect. Whether Western principles covering the treatment of people who are the subjects of research can and should be applied in Africa and Asia has become a bitterly debated question.
The question was first posed by the research that followed the 1994 finding that is known by its grant number—076—in the Pediatric AIDS Clinical Trials Group, a consortium of university-based investigators funded by the National Institutes of Health (NIH). The purpose of the research, everyone agrees, was admirable: to learn how to prevent the transmission of HIV from HIV-positive pregnant women to their children. The dispute that arose concerned whether the research was conducted ethically.
In 076, American investigators proved conclusively, through clinical trials in the US, that giving AZT to HIV-positive pregnant women during their pregnancy and immediately before labor, and then to their newborn infants for six weeks, significantly reduced the rate of transmission of HIV. Without AZT, roughly one quarter of the women transmitted the virus to their newborn babies. With AZT, mothers passed on the virus only 8 percent of the time, for a total reduction of 66 percent. Clearly, AZT provided extensive protection against the spread of AIDS from mother to child.2
Even the 076 trial stirred some argument. AZT is a highly toxic drug, with many serious side effects, and investigators were administering it to pregnant women of whom only one quarter would have passed on the disease. Was it ethical to subject the fetuses of the other three quarters to a toxic drug, when, if left alone, they would not have suffered any…