Tony, a seven-year-old boy with artificial limbs, July 1963. He was born without arms as a result of the drug thalidomide. (Photograph: FPG/Hulton Archive/Getty Images)

In the first part of this review, I discussed principles and codes of ethics concerning human experimentation, including the Nuremberg Code and the Declaration of Helsinki.1 But principles and codes are not the same as laws and regulations, even though they might inspire them. The first US statute dealing with the ethics of medical research on human subjects was enacted in 1962, as a reaction to the thalidomide tragedy of the late 1950s, in which pregnant women given thalidomide to alleviate morning sickness gave birth to infants with deformed or missing limbs.

Although the drug had not been approved for general use in the US, it was given as an experimental drug to many American women, and they were not told they were research subjects. The 1962 law mandated that human subjects be informed about the research and give consent. The National Institutes of Health also began to require that institutions that received NIH funding set up committees to review research on human subjects. In 1966, Henry Beecher, an anesthesiologist at the Massachusetts General Hospital, published an exposé of unethical studies that had appeared in medical journals.2 But there was not much general attention to the subject until the 1972 public revelation of the “Tuskegee Study of Untreated Syphilis in the Negro Male.” When this research made front-page news in The Washington Star and The New York Times, it had been ongoing for forty years—straddling the Nazi era.

The Tuskegee study was launched by the US Public Health Service (the parent body of the NIH) in 1932. In it, 399 poor African-American men with untreated syphilis were observed and compared with 201 men free of the disease to determine the natural history of syphilis. At the time the study began, syphilis was a major scourge. The only treatments were heavy metals, like arsenic or mercury, which were toxic and not very effective. The idea was to observe latent untreated syphilis, since there was a suspicion that men in the later stages of the disease might actually fare better without treatment. The men were not informed about their disease or the purpose of the study. They were told only that they would receive free examinations and medical care.

The lack of informed consent was not unusual in those days. (There was no informed consent in the 1948 streptomycin trial, discussed in Part 1 of this review, either.) What was worse was the fact that the study continued even after penicillin was found to be effective against syphilis in the 1940s. In fact, during World War II, these men were exempted from the draft to prevent their being treated with penicillin while in the military. Later, researchers justified continuing the study by saying there would never be another chance to observe untreated syphilis. When the details of the study were revealed, there was widespread outrage, and the Nixon administration halted the study.

Suddenly, there was action. Congress passed the National Research Act of 1974, and regulations were issued that mandated the establishment of ethics committees, now called institutional review boards (IRBs), to review federally funded human research. The law also established a National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research to develop overarching principles. Its report—called the Belmont Report, named for the conference center where the commission began its deliberations—was issued in 1978. As of 2014, fifteen federal departments and agencies have adopted a common set of regulations based on the Belmont Report to govern research on human subjects. Known as the Common Rule (it appears as Subpart A of Title 45, Part 46 of the Code of Federal Regulations), it applies to virtually all federally funded research on human subjects, and most institutions follow it even for privately funded research. (Subparts B, C, and D apply to especially vulnerable groups—pregnant women, children, and prisoners.)

Look at how far we’ve come from the Nuremberg Code. The very existence of umbrella federal regulations, along with the rise of IRBs, demonstrates how much the locus of responsibility has shifted. Originally, responsibility was placed exclusively on the two parties directly involved—human subjects and researchers. According to the Nuremberg Code, subjects had absolute freedom either to consent or refuse to participate. And researchers had total responsibility for seeing that the research was conducted ethically. Now responsibility lies primarily with government and IRBs. Subjects have lost some of their freedom, since the requirement for informed consent is now conditional and can even be waived altogether. And researchers must abide by government regulations and IRB decisions. Whether this shift is a net gain is arguable, but it is certainly a major change.

For any given proposal, IRBs have the option of permitting the research to go forward, stopping it, or requiring revision. There is no mechanism for appealing their decisions. So what do we know about how these enormously powerful committees make their decisions? Next to nothing, according to Robert Klitzman in his book The Ethics Police? The Struggle to Make Human Research Safe. “It is remarkable,” he writes,

that the question of how IRBs themselves actually work, make decisions, and view and understand these quandaries has received relatively little attention. Only a few studies of IRBs have been published, and these have focused on procedural and logistical issues.

He starts with an overview of the bare facts. There are now about four thousand IRBs in the US, most at nonprofit research institutions, mainly academic medical centers (which consist of medical schools and teaching hospitals), and they usually meet monthly. But private, for-profit IRBs have also sprung up, which, for a price, review studies for pharmaceutical companies or other sponsors. And some academic IRBs have also started to charge industry sponsors for evaluating protocols—around $2,500 for initial reviews, and $500 for continuing reviews. Most boards review hundreds of studies a year, and many large medical centers have five or six IRBs. The Common Rule requires that IRBs have at least five members, at least one of whom is not otherwise affiliated with the institution; and, in the words of the Common Rule, at least one member “whose primary concerns are in scientific areas and at least one member whose primary concerns are in nonscientific areas.” Chairs are paid about a fifth of their academic or hospital salaries, and administrators work full-time.

After giving us the overview, Klitzman sets out to lift the curtain on the actual workings of these committees through extensive interviews with forty-six IRB members—twenty-eight chairs or cochairs, seven other members, ten administrators, and one IRB director—drawn from sixty randomly selected academic and nonprofit research institutions, thirty-four of which chose to participate. The results are sobering. The people interviewed are generally earnest and well-meaning, but they admit that they have almost no basis for their ethical decisions. In Klitzman’s words, “Remarkably, though PIs [principal investigators], to conduct studies, must regularly undergo testing about research ethics, no such requirements exist for IRB chairs, members or staff,” and “IRB members may not only be ‘self-taught’ in ethics, but use ‘gut feelings’ and the ‘sniff test,’ not careful ‘ethical analysis.’” Adding to the confusion, many multicenter studies require the approval of more than one IRB. One of the members interviewed confessed, “On a whole host of issues, we have absolutely no guidance, which contributes to very heterogeneous reactions of IRBs handling exactly the same studies.”

The underlying problem here is the failure of the Common Rule to provide substantive ethical guidance. Instead it is almost entirely concerned with structure and process, such as the composition of IRBs, documentation (which is required to be absurdly comprehensive and detailed), and assurance of compliance. One IRB member, speaking of his institution’s consent form, told Klitzman, “They have a 15-page consent form that is just silly in readability. You can tell the lawyers have been all over the form.” But as to the ethical basis for decisions, in addition to informed consent, these are the only requirements: “Risks to subjects are minimized,” and “Risks to subjects are reasonable in relation to anticipated benefits, if any, to subjects, and the importance of the knowledge that may reasonably be expected to result.” There is hardly any explanation of these almost offhand and vague requirements. The Common Rule also provides for waiving the requirement for informed consent if “the research could not practicably be carried out without the waiver or alteration,” but says little about what that means. In short, IRBs are flying by the seat of their pants.

Here is what is not discussed in the Common Rule: Should a distinction be made between healthy volunteers and patients suffering from the medical condition under study? The latter often believe, despite what they may have been told, that researchers’ primary aim is to treat them, not to study them. This misunderstanding is termed the “therapeutic misconception,” and it is certainly understandable, especially since many researchers are also physicians. The patients naturally believe that physicians will treat them according to their best judgment and alter the treatment based on whether it seems to be working. Should it be more strongly emphasized that they will be treated as a group, according to an unvarying protocol? Should patients in clinical trials be required to have an additional physician who has no connection with the trial?

In addition, the Common Rule says almost nothing about the special problems of research in developing countries, including whether it is ethical to use placebos in control groups instead of a known effective treatment, and whether consent can be truly voluntary in regions with autocratic governments that benefit from the money and prestige that come from hosting research sponsored by developed countries.3 One IRB member, speaking about the use of placebos in control groups, told Klitzman, “The solution is to do things elsewhere that wouldn’t be considered ethical in this country. The FDA has no problem accepting those data from abroad.” He went on to say, “The favorite place to do these now is either South America or Eastern Europe.” Perhaps researchers should not even conduct research in developing countries unless the medical condition under study, like some tropical diseases, only occurs in these countries.

Another question that the Common Rule glosses over: Should the scientific importance of the research be considered in IRB decisions? Many studies of drugs similar to ones already on the market (called me-too drugs), and many post-marketing studies (additional research to try to find some edge over competitors) offer almost no benefit to society, as I will make clear later. As one IRB member told Klitzman, “Some kind of data is collected—usually very naturalistic data. But the true purpose of the study is to prime the pump—get the drug into clinical use—to assist with marketing” (Klitzman’s italics). Another member said:

Bad science is bad ethics. A bad study that’s not going to tell you anything—even if it doesn’t expose people to risk, but only inconvenience, and takes time, just to facilitate marketing—doesn’t make any sense to me.

In addition to the lack of ethical guidance, a serious problem faced by IRBs is their inherent conflict of interest, since they are usually established and work within the institutions where the research is conducted. Clinical research grants are a huge revenue stream for large medical centers. They cover much more than the direct costs of conducting the research, and they pay part of faculty researchers’ salaries. According to Klitzman, “IRB members may hesitate to question a study too much because the industry sponsors might then simply ‘pull out’ and take the funding to another institution, thereby impeding their colleagues’ careers.” IRBs, he says, “vary as to whether they see their primary clients as investigators, subjects, funders, or the institution”—if true, a damning fact. One IRB member told him, “Hospitals are told that they need IRB approval or won’t get funded, and the hospital puts pressure on the IRB to approve it,” sometimes by replacing the chair with someone less critical. When IRBs charge fees to review research proposals, the conflict is even starker. Another member said the fee

used to be collected as a discretionary fund, used as IRBs chose. Now, it’s been expropriated by senior officials and absorbed as part of the whole school revenue stream, which has affected our turnaround time.

(I bet it was faster.) For-profit IRBs are just as conflicted, because their clients have a financial interest in the research.

Individual IRB members may also have conflicts of interest. The Common Rule states:

No IRB may have a member participate in the IRB’s initial or continuing review of any project in which the member has a conflicting interest, except to provide information requested by the IRB.

But that provision seems to be honored in the breach. Klitzman writes that

despite the potential threats to integrity, 36 percent of IRB members have financial relationships with industry; 23 percent of those with a COI [conflict of interest] had never disclosed it to an IRB official; and 19.4 percent always nonetheless voted on the protocol.

Klitzman seems to like the people he interviewed; he approves of what they are trying to do, and wants to present them fairly. But he is also aware of the inconsistencies and confusion their comments reveal, and he clearly understands the implications. In his last two chapters, he offers some suggestions for reform, many of which make sense. But he does not call for an overhaul of the system, and I believe that is what is needed.

First, the Common Rule itself needs to be revised, because it is almost devoid of ethical content. Instead, it should deal with difficult substantive issues. These include the “therapeutic misconception” (patients’ belief, as described earlier, that the researchers are there to provide them with individual care); the higher likelihood of harms than benefits because experimental treatments are usually no better, and often worse, than current treatments; the trade-off between individual benefits and benefits to science and society; whether the scientific merit of the research should be given weight (as called for in the Nuremberg Code); and whether to limit the move to conduct clinical trials in developing countries where there is almost certainly less oversight. None of these issues have easy answers, but the regulations simply don’t address them seriously.

Second, since human subjects are drawn from the public, IRBs should represent the public. Currently, IRBs are the creatures of the research institutions whose work they are evaluating or of private companies hired by research sponsors or institutions conducting the research. They thus have every incentive to approve research projects. In academic institutions, for example, prolific faculty researchers with large research grants are a major source of revenue, and IRBs are naturally reluctant to oppose these academic stars. The solution is for IRBs to be regional public entities, entirely independent of research institutions and private sponsors. They could be established by the Department of Health and Human Services or by another governmental body, but it is essential that the protection of human subjects be seen as a public obligation, and that IRBs represent the public directly, not researchers or industry.

Third, the informed consent process should be changed to include evidence that the information is relevant and understood. It would be a simple matter to film the conversation between the researcher and the prospective subject (or his or her legal proxy). There would be two parts to the video: first, the researcher would provide the essential information, and second, the prospective human subject would be asked to repeat his or her understanding of what was just said.

A signature on a written document could also be obtained, but the real quality of the consent—that is, whether it was truly informed and comprehended (which are not the same thing)—would be on video. As it now stands, the term “informed consent” is virtually meaningless. In fact, the word “consent” has become a transitive verb, as in “consent the patient,” which means to get his or her signature on a legal document. Often the job is not considered a high priority by the researcher and is left to junior staff. That should change; informing prospective human subjects should be a two-way conversation involving the researchers themselves.

Fourth, research using human subjects should have a serious scientific purpose, as stipulated in the Nuremberg Code. Currently, many clinical trials are simply a means to sell prescription drugs of little or no medical value. Some background: before a prescription drug may be sold in the US, the drug company must sponsor clinical trials to show the FDA that the drug is reasonably safe and effective. But the new drug needn’t be any better than drugs already on the market to treat the same condition; in fact, in most cases, it only needs to be better than a placebo. Once the FDA approves the drug, no other company may sell the same drug for the same use during the life of its relevant patent.

But most newly approved drugs, according to the FDA itself, are probably no better than drugs already on the market, and most are not even new drugs at all, but old ones modified just enough to get a new patent (me-too drugs). There are expanding classes of me-too drugs, such as statins to lower cholesterol (starting with Mevacor in 1987 and moving on to Zocor, Lipitor, Pravachol, and others), or SSRIs to treat depression (starting with Prozac in 1987 and moving on to Paxil, Zoloft, Celexa, and others). There is little evidence that any one drug in a class is better than the others, since they are rarely tested head to head at equivalent doses. Me-too drugs are sometimes made by the same company when the first one is nearing the end of its patent life (AstraZeneca replaced its best-selling Prilosec with the virtually identical Nexium). Sometimes an old drug is tested in clinical trials to treat a slightly different but related condition, so that the company can get a new patent and extend its market that way; for example, Eli Lilly marketed Prozac as Sarafem—same dose but higher price—for premenstrual symptoms.4

The relevant point here is that all of these commercial maneuvers required the use of human subjects for the clinical trials necessary to get FDA approval. People who agree to become research subjects because they believe they are contributing to important scientific knowledge would, I suspect, be disillusioned if they realized that they are contributing mainly to drug companies’ bottom line. Moreover, trivial research is a huge diversion of resources—including human subjects—from important research on the causes, mechanisms, and treatment of disease.

Fifth, the Common Rule permits research of “minimal risk” to be reviewed quickly by only one or two members of the IRB, and that makes sense. But it should better define “minimal risk.” When prospective human subjects have the medical condition under study, they risk being denied the best treatment. That can be a big risk, and in general, research on sick people should almost never be considered minimal risk. In contrast, a great deal of research on healthy volunteers is of minimal risk. But the decision as to what is minimal should not be left to the researchers, who have an obvious conflict of interest. Social science research presents quite different risks, if any, from medical research. Much of it is essentially risk-free and needs only very quick approval. Some of it, however, has significant social or psychological risks, particularly in developing countries, and should be reviewed more fully. It might be useful to create separate IRBs or standing IRB subcommittees to review social science research.

Despite all the problems and abuses, research on human subjects is absolutely necessary to advance our understanding of disease, and to prevent and treat illness better. People who volunteer to participate in such research perform a service for which the rest of us should be grateful. We must make every effort to protect human subjects from harm, and also to protect their autonomy and dignity. In addition, we should not misuse them in research that has no serious purpose. In one sense, human subjects can be considered an immensely valuable public resource not to be squandered. More important, they are human—sometimes our friends and neighbors—and setting up a system to protect them in every possible way is the right thing to do.

—This is the second of two articles.