In response to:
Health Care: Who Knows 'Best'? from the February 11, 2010 issue
To the Editors:
In his cautionary essay, Jerome Groopman writes about the dangers of governments and regulators taking a prescriptive approach to medical practice based on “best practice” guidelines [“Health Care: Who Knows ‘Best’?,” NYR, February 11]. In support of his skepticism about their value, he gives examples of guidelines that have been overturned after accumulating evidence indicated practices were being recommended that were at best useless. Among others, these included recommendations that ambulatory diabetics should have their blood glucose very tightly controlled for the sake of their cardiovascular health, chronic renal failure patients on dialysis should take statin drugs to reduce their vascular event rate, patients with pneumonia must be treated with antibiotics within four hours, and anemic cancer patients should be treated with erythropoietin.
But critically, what Dr. Groopman fails to mention is that none of these recommendations was supported by high-quality evidence even when it was written. None was supported by a large randomized trial showing improvements in real clinical outcomes. Instead the studies on which the guidelines were based measured “surrogate” outcomes, which were supposed to be as good as clinical outcomes, simple examples being the measurement of cholesterol levels in the case of statins and the measurement of red blood cell counts in the case of erythropoietin.
There are probably many reasons guideline writers are way out in front of the available evidence, not limited to their financial ties to industry previously documented in The New York Review [Marcia Angell, “Drug Companies & Doctors” NYR, January 15, 2009]. The biggest problem is not that there is a real likelihood of future regulators being dangerously overzealous in their application of guidelines, but that many guidelines are simply not justified by evidence.
Current examples are easy to find: the American College of Cardiology/American Heart Association guidelines on the management of (non-ST elevation) acute coronary syndromes recommend the routine use of ACE inhibitors despite no support from trials in this clinical scenario. The same guidelines also recommend treatment with cholesterol-lowering agents to achieve certain low cholesterol targets, the consequence of which is a huge industry involving repeat visits to clinicians for cholesterol measurements and dose adjustment or addition of drugs when these targets are not met. This is despite there being no convincing evidence that this is any better than simply prescribing a high-potency statin drug and sending the patient on his way.
Dr. Groopman seems to miss all this, believing that the major “conceptual error” guideline writers make is failing to recognize that their guidelines may not be applicable to “the individual patient.” I would have thought a bigger issue is whether they are applicable to any patient at all.
Senior Lecturer, Department of Medicine
University of Queensland
To the Editors:
Jerome Groopman glosses over the real danger in mammography: overdiagnosis. He did concede that one life would be saved for screening 1,904 women in their forties, but he left out that ten healthy women would be treated unnecessarily (i.e., surgery, radiation, chemo). This is based on an overdiagnosis rate of 30–70 percent (!) by Norwegian and Danish epidemiologists, which is naturally disputed by the American Cancer Society and others. If this is correct, it means that it is not just a matter of harmlessly delaying treatment for an indolent cancer; it is a matter of many of the invasive cancers identified by mammography and biopsy disappearing on their own, and better left undetected and “untreated.”
So it boils down to a philosophic choice. It would be nice if these discussions with patients could be encouraged, so they can make up their own minds. Patients who just want the doctor to tell them what to do are asking the doctor to pretend he knows the right answer.
I appreciate Groopman drawing attention to “the focusing illusion,” but it may be more prevalent than he realizes. All oncologists have suffered through the premature deaths of wonderful women from breast cancer; how could they possibly know that something they treated brilliantly and humanely was never going to cause problems? Now there’s a potential focusing illusion.
Finally, Groopman brilliantly highlights the irony that the White House is pushing best practices as a cost-cutting measure, while this effort has not been shown to cut costs elsewhere, nor improve care.
Richard Ganz, M.D.
Dr. Pincus omits the first of the mandated “best practices” enumerated in my article: tight regulation of blood glucose in critically ill patients in the intensive care unit. This was among the most aggressively promulgated guidelines by the government and insurers. Contrary to his contention that prior recommendations relied on surrogates rather than meaningful clinical outcomes, tight regulation of blood glucose in ICU patients was based on randomized prospective clinical trials that measured death as the outcome.
These studies, as well as subsequent research that contradicted their findings, were published in The New England Journal of Medicine and are cited in the footnotes of my essay. The recommendation that I prematurely endorsed on erythropoietin treatment for cancer patients was based not only on increasing the “surrogate” of red blood cell counts but also data on improving quality of life, sparing patients the risks of transfusion, and preserving the precious resource of donated blood for other patients like those who hemorrhage.
These facts belie Dr. Pincus’s critique. Furthermore, Dr. Pincus oversimplifies the difficulties in crafting “prescriptive guidelines” that standardize therapies for the kinds of patients seen in daily clinical practice. Statistics from randomized trials, as he surely knows, represent averages of selected groups of patients. Knowing how to “best” treat an individual patient, particularly one who has concurrent medical problems that would have barred him from the clinical trial, requires referring to the guidelines but not necessarily adhering to them. But there is an even more important flaw in Dr. Pincus’s analysis. How experts judge the “quality” of evidence is hardly a uniform or objective process. Randomized controlled clinical trials, which are taken as usually yielding more reliable data, nonetheless are hotly debated among experts with respect to their design: which patients were included and excluded, which outcomes were “primary,” meaning the overriding aims of the study, and which were “secondary,” meaning those providing information on other possible benefits of the treatment.
Different experts bring their own mindsets and biases to bear in judging not only the quality of evidence from these clinical trials but the tradeoffs between risks and benefits of the therapy. For example, in the randomized prospective studies of tight control of blood glucose in ambulatory diabetics with cardiovascular disease, there are indications of possible benefit with regard to protecting small blood vessels from the deleterious effects of diabetes, thereby sustaining kidney function and averting blindness; but offsetting these potential gains, tight control may promote heart attack and stroke and increase the risk of death.
Indeed, every physician has attended clinical conferences where credible specialists debate the sagacity of trial design and the trade-offs between risk and benefit of the treatment. But one need not enter a medical center to witness such a debate. The medical journals routinely publish editorials in conjunction with large randomized clinical trials in which independent researchers in the field point out the strengths and weaknesses of the studies. And within weeks of publication of the data, the same medical journals are filled with letters from credible critics who point out pitfalls in design and in the execution of the clinical trial and weigh in with their own interpretation of the risk versus benefit trade-off. The better press coverage of these trials includes the expert voices of both advocates and dissenters when presenting results of clinical research to the public. It is very rare that we have situations in clinical medicine in which a black or white answer is apparent.
Does this mean we should do away with guidelines? Not at all. Rather, the major point of my essay was the probity of mandates versus suggestions. If an expert committee is convened with the imperative to come to a consensus and write a mandated guideline, then it will do just that. But if patients and their physicians are provided with the full range of expert opinions, zealous and conservative, and specialists articulate such views, explaining how they weighed the “quality” of the evidence, then a great service is done to foster truly informed choices. Dr. Pincus ignores the reality that clinical data from randomized trials are imperfect and credible experts bring their biases to bear when they judge evidence to be of high or low quality, even when untainted by financial conflicts of interest.
Dr. Ganz cites an inference from a single epidemiological study that is far from proven, as the authors forthrightly state in their publication. There are no direct, prospective, and compelling data on breast cancer spontaneously remitting. A more important issue in breast cancer diagnosis and treatment is what is termed ductal carcinoma in situ, or DCIS. This is a very early stage of the malignancy, often detected by mammogram. There is considerable controversy about how often and how quickly DCIS grows into an invasive cancer, and whether it should be treated by surgery or radiation or hormonal blockers like Tamoxifen. There are a number of well-designed ongoing clinical studies to obtain better knowledge about DCIS, and with it, hopefully, provide women and their physicians with a sounder basis to make decisions.