Until the 1990s American medical researchers performed most of their experiments on other Americans—frequently choosing subjects who were poor and vulnerable.1 Now, however, they are increasingly likely to conduct their investigations in third world countries on subjects who are even poorer and more vulnerable. Part of the reason is AIDS—the first modern infectious disease to strike the developed and developing world simultaneously and to give both a large stake in finding a cure. Part of the reason, too, is the mounting financial and regulatory burdens of research in the rich nations, which cause investigators, both from universities and drug companies, to go to the poorer countries to test new treatments.
Whatever the reason, practice has overwhelmed ethics. The major international codes on human experimentation, including the principles proclaimed at Nuremberg in 1947 and the World Medical Association’s Declaration of Helsinki in 1964, all say that the well-being of the subject always should take precedence over the needs of science or the interests of society, and that doctors must obtain “the subject’s freely informed consent.” But neither these codes nor the Western groups concerned with medical ethics have had the developing countries in mind. Countries in which clinical trials are now conducted are often too poor to pay for the medicines that are successfully tested. And the people recruited for those trials very seldom get the kind of medical care the participants in trials in prosperous countries can expect. Whether Western principles covering the treatment of people who are the subjects of research can and should be applied in Africa and Asia has become a bitterly debated question.
The question was first posed by the research that followed the 1994 finding known by its grant number—076—in the Pediatric AIDS Clinical Trials Group, a consortium of university-based investigators funded by the National Institutes of Health (NIH). The purpose of the research, everyone agrees, was admirable: to learn how to prevent the transmission of HIV from HIV-positive pregnant women to their children. The dispute that arose concerned whether the research was conducted ethically.
In 076, American investigators proved conclusively, through clinical trials in the US, that giving AZT to HIV-positive pregnant women during their pregnancy and immediately before labor, and then to their newborn infants for six weeks, significantly reduced the rate of transmission of HIV. Without AZT, roughly one third of the women transmitted the virus to their newborn babies. With AZT, mothers passed on the virus only 8 percent of the time, for a total reduction of 66 percent. Clearly, AZT provided extensive protection against the spread of AIDS from mother to child.2
Even the 076 trial stirred some argument. AZT is a highly toxic drug, with many serious side effects, and investigators were administering it to pregnant women of whom only one third would have passed on the disease. Was it ethical to subject the fetuses of the other two thirds to a toxic drug, when, if left alone, they would not have suffered any adverse consequences?
This question had to be submitted to the institutional review boards (IRBs) at the researchers’ home institutions. By federal regulations, all human experiments supported with federal funds must first be approved by an IRB, and practically every university, hospital, or company doing such research has established one. The regulations spell out how an IRB should be organized (e.g., with no fewer than five members, with at least one not affiliated with the institution) and what standards it should enforce (research benefits must outweigh risks, and investigators must give potential subjects enough information to ensure informed consent).
But the final decision on what is or is not ethical research is left to the individual IRBs. There is no regular review of their decisions, and, despite some requests to create one, there is no national IRB to supersede them. In the AZT research on pregnant women, all the IRBs took the position that since no one could identify in advance which newborn would be spared the disease and which would contract it, it was ethical to subject all of them to the risk of toxic effects.
Giving AZT to HIV-positive pregnant women and newborn infants immediately became the standard of care in American hospitals. (Some doctors and public health officials even advocated compulsory HIV testing of pregnant women to ensure that their offspring were protected.) But this treatment stood little chance of being adopted in developing countries with mounting cases of AIDS. A six-month course of AZT costs about $800, far beyond the budgetary means of countries whose average annual expenditure per citizen for health care was below $25. Some American investigators, strongly suspecting that the virus was most likely to be passed during late pregnancy or childbirth, suggested that a short course of AZT might be almost as protective as the long course. Were this true, the cost of treatment would be markedly reduced and the benefits almost as great.
The clinical trials to test the efficacy of a short course of AZT required two groups, or arms as they are called. The active arm would receive the short course. But what would the second arm, the control group, receive? Should it get the full course of AZT that American women were receiving, or should it get a placebo? Almost all the researchers in the field—most of them in southern Africa and Thailand—decided to give the control groups a placebo. In February 1998, the result of the first trial was announced: the short course of AZT was effective, not to the degree of the full course but substantially more effective than the placebo. A small amount of AZT (at a cost of $50, as against $800 for the long course) reduced transmission by 40 to 50 percent. This was excellent news for countries like Thailand, which could afford the treatment, and good news for African countries, which would have more difficulty paying for it but could hope to supplement their medical budgets with humanitarian aid.
But the positive findings did nothing to reduce the intensity of the debate over whether the control groups should have received some medical treatment. The basic issue was what ethical obligations researchers owe to a control group facing a deadly disease when an effective therapy exists. Since the efficacy of AZT against mother-to-infant transmission was fully established, why not give the control groups the long course of AZT and use this as the base against which to measure outcomes for the short course?
This was precisely the position adopted by Marcia Angell in a now famous New England Journal of Medicine editorial.3 Angell cited the Declaration of Helsinki provision that control groups should always receive the “best proven diagnostic and therapeutic method,” which in this case meant the long course of AZT. When researchers in southern Africa and Thailand gave control groups a placebo, Angell wrote, they violated the Helsinki standards and demonstrated “a callous disregard of their welfare.” She then went on to compare the research to the Tuskegee study, the most notorious American research scandal, in which, from the 1930s through the 1960s, the US Public Health Service had purposely withheld known effective treatments from black men suffering from syphilis. Angell charged that investigators were now withholding effective treatments from black women and children in Africa suffering from AIDS. “It seems,” concluded Angell, “as if we have not come very far from Tuskegee after all. Those of us in the research community need to redouble our commitment to the highest ethical standards, no matter where the research is conducted.”
Her position was supported by Sidney Wolfe and Peter Lurie, the physicians who head the Health Research Group of Public Citizen, the organization founded by Ralph Nader.4 They calculated that as of 1997, sixteen research projects were investigating the effectiveness of short course AZT, using as subjects some 17,000 pregnant women in developing countries. In fifteen of the sixteen projects, nine of which were funded by the NIH or the Centers for Disease Control (CDC), the control groups did not receive AZT. (The one exception was a Harvard School of Public Health project in Thailand.)
Wolfe and Lurie could find no justification for allowing investigators to adopt lower standards abroad than they used in the US. “Researchers working in developing countries,” they wrote, “have an ethical responsibility to provide treatment that conforms to the standard of care in the sponsoring countries, when possible.” They conceded that if achieving that standard required exorbitant expenses, like building an intensive care unit, the requirement could be waived. But if the test involved a drug that the manufacturer could, and sometimes did, provide free of charge, then a different standard was truly a double standard, and this, they concluded, “creates an incentive to use as research subjects those with the least access to health care.”
The position of Angell, Wolfe, and Lurie provoked responses every bit as vigorous and uncompromising. The head of the NIH, Harold Varmus, and the head of the CDC, David Satcher, defended 076, as did Michael Merson, executive director of the WHO Global Program on AIDS.5 The long course of AZT, they said, was not only very expensive but required frequent medical monitoring that was beyond the capacity of developing countries. So giving AZT could in fact be compared to building an intensive care unit. They also argued that it might not be safe to use AZT in a population that was seriously undernourished and suffering from anemia, and that placebo trials yielded answers more quickly than trials with active controls.
Since critics contested each of these points, defenders of the post-076 trials went on to insist that research ethics in developing countries should not be dictated by the United States. Local ethics committees, they claimed, were competent to review research projects, and since Africans and Asians had approved these trials, outsiders should not second-guess them. Varmus and Satcher quoted from a letter written by the chairman of the Uganda Cancer Institute research committee: “These are Ugandan studies conducted by Ugandan investigators on Ugandans. … It is not NIH conducting the studies in Uganda but Ugandans conducting their study on their people for the good of their people.”
One last contention was too political to be voiced openly but was often hinted at privately. No country wanted to spend significant amounts of money on second-class treatment. If a short course of AZT was openly compared to a long course, health officials would have to ask political leaders to fund a program that was less effective than the American one. But if results from the short course were compared to those from a placebo, they would be able to request funding to reduce by half the number of newborn babies infected by HIV.
Just how irreconcilable are the differences between the two camps becomes apparent in the provisions of the 1993 “International Ethical Guidelines for Biomedical Research Involving Human Subjects.” Drafted by the Council for International Organizations of Medical Sciences and the WHO, the document attempts to formulate research ethics in developing countries with particular attention to combating AIDS. However, the document is ambivalent about the issues raised by the post-076 trials. First, it declares, “investigators must respect the ethical standards of their own countries.” They “risk harming their reputation by pursuing work that host countries find acceptable but their own countries find offensive.” But it then adds that investigators must respect “the cultural expectations of the societies in which research is undertaken” and ought not to “transgress the cultural values of the host country by uncritically conforming to the expectations of their own.” So should researchers conduct such placebo trials? The document does not say.
1. The mixed record of human experimentation in the US continues to be explored, with recent attention devoted to the government’s secret radiation experiments during the cold war. See Eileen Welsome, The Plutonium Files (Dial Press, 1999), and Jonathan D. Moreno, Undue Risk (Freeman, 1999; forthcoming in paperback).
2. E.M. Connor et al., “Reduction of Maternal-Infant Transmission of Human Immunodeficiency Virus Type 1 with Zidovudine Treatment,” The New England Journal of Medicine, Vol. 331 (1994), pp. 1173–1180.
3. Marcia Angell, “The Ethics of Clinical Research in the Third World,” NEJM, Vol. 337 (1997), pp. 847–849.
4. Peter Lurie and Sidney Wolfe, “Unethical Trials of Interventions to Reduce Perinatal Transmission of the Human Immunodeficiency Virus in Developing Countries,” NEJM, Vol. 337 (1997), pp. 853–856.
5. Harold Varmus and David Satcher, “Ethical Complexities of Conducting Research in Developing Countries,” NEJM, Vol. 337 (1997), pp. 1003–1005; and Vol. 338 (1998), pp. 836–844.