Background: the distinction between EBM and SBM
An important theme on the Science-Based Medicine blog, and the very reason for its name, has been its emphasis on examining all the evidence—not merely the results of clinical trials—for various claims, particularly for those that are implausible. We’ve discussed the distinction between Science-Based Medicine (SBM) and the more limited Evidence-Based Medicine (EBM) several times, for example here (I began my own discussion here and added a bit of formality here, here, and here). Let me summarize by quoting John Ioannidis:
…the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance.
EBM, in a nutshell, ignores prior probability† (unless there is no other available evidence) and falls for the “p-value fallacy”; SBM does not. Please don’t bicker about this if you haven’t read the links above and some of their own references, particularly the EBM Levels of Evidence scheme and two articles by Steven Goodman (here and here). Also, note that it is not necessary to agree with Ioannidis that “most published research findings are false” to agree with his assertion, quoted above, about what determines the probability that a research finding is true.
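Ioannidis's point, that the probability a finding is true depends on the prior probability, the power, and the significance level, can be made concrete with simple positive-predictive-value arithmetic. The following is my own minimal sketch of that calculation, not code from any of the linked posts; the numbers are illustrative:

```python
def post_study_probability(prior, power=0.8, alpha=0.05):
    """Probability that a statistically 'significant' finding is true,
    given the prior probability that the hypothesis is true, the study's
    power (1 - beta), and its significance level alpha."""
    true_positives = power * prior          # true hypotheses that reach significance
    false_positives = alpha * (1 - prior)   # false hypotheses that reach significance anyway
    return true_positives / (true_positives + false_positives)

# A plausible hypothesis: a "significant" result raises a 50% prior to ~94%.
print(round(post_study_probability(0.5), 2))    # 0.94

# A wildly implausible claim (prior ~1 in 1,000): even a "significant"
# result leaves the finding almost certainly false.
print(round(post_study_probability(0.001), 2))  # 0.02
```

This is why a p-value below 0.05 means very different things for a plausible treatment and for, say, homeopathy: the same "evidence" yields wildly different posterior probabilities.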
The distinction between SBM and EBM has important implications for medical practice ethics, research ethics, human subject protections, allocation of scarce resources, epistemology in health care, public perceptions of medical knowledge and of the health professions, and more. EBM, as practiced in the 20 years of its formal existence, is poorly equipped to evaluate implausible claims because it fails to acknowledge that even if scientific plausibility is not sufficient to establish the validity of a new treatment, it is necessary for doing so.
Thus, in their recent foray into applying the tools of EBM to implausible health claims, government and academic investigators have made at least two serious mistakes: first, they have subjected unwary subjects to dangerous but unnecessary trials in a quest for “evidence,” failing to realize that definitive evidence already exists; second, they have been largely incapable of pronouncing ineffective methods ineffective. At best, even after conducting predictably disconfirming trials of vanishingly unlikely claims, they have declared such methods merely “unproven,” almost always urging “further research.” That may be the proper EBM response, but it is a far cry from reality. As I opined a couple of years ago, the founders of the EBM movement apparently “never saw ‘CAM’ coming.”
The “Gonzalez” Trial
One such dangerous and unnecessary trial was “Evaluation of Intensive Pancreatic Proteolytic Enzyme Therapy with Ancillary Nutritional Support Versus Gemcitabine Chemotherapy in the Treatment of Inoperable Pancreatic Adenocarcinoma,” begun in 1999. It was funded by the National Cancer Institute (NCI) and the National Center for Complementary and Alternative Medicine (NCCAM), and was conducted by Nicholas Gonzalez and investigators at Columbia University. The “enzyme therapy” part of the trial is more commonly known as the “Gonzalez Detoxification Regimen”:
Patients receive pancreatic enzymes orally every 4 hours and at meals daily on days 1-16, followed by 5 days of rest. Patients receive magnesium citrate and Papaya Plus with the pancreatic enzymes. Additionally, patients receive nutritional supplementation with vitamins, minerals, trace elements, and animal glandular products 4 times per day on days 1-16, followed by 5 days of rest. Courses repeat every 21 days until death despite relapse. Patients consume a moderate vegetarian metabolizer diet during the course of therapy, which excludes red meat, poultry, and white sugar. Coffee enemas are performed twice a day, along with skin brushing daily, skin cleansing once a week with castor oil during the first 6 months of therapy, and a salt and soda bath each week. Patients also undergo a complete liver flush and a clean sweep and purge on a rotating basis each month during the 5 days of rest.
More than two years ago I wrote a series of posts* discussing Gonzalez, the regimen, the history leading to the funding of the trial, and aspects of the trial itself. In summary, Gonzalez appears to be a dangerous quack who should have had his medical license stripped by the state of New York during the 1990s, but was saved at the 11th hour by naïve “alternative medicine” enthusiasm; the regimen was highly implausible; there were no prior animal or clinical studies sufficient to warrant a human trial; its real impetus was the gathering political strength of the anti-intellectual “health freedom” movement in the aftermath of the Laetrile wars, culminating in Congressman Dan Burton’s bullying of NCI Director Richard Klausner; the NCI and Columbia subsequently justified the trial by citing a dubious case series provided by Gonzalez himself; the trial was unethical in numerous ways, amounting to torture-until-death for at least one hapless subject who, desperate for anything that might work, had stumbled into it because of his own scientific naïveté and because his consent was uninformed by existing knowledge—which, of course, the Columbia investigators should have provided him.
Shortly after my first series of posts it became clear that the Columbia investigators had found the Gonzalez regimen sufficiently inferior to the standard chemotherapy regimen to have stopped the trial early, in 2005. Finally, in August of 2009, the report of the trial, by John Chabot and colleagues (Gonzalez’s name was conspicuously missing from the report), was published by the Journal of Clinical Oncology (JCO): subjects in the Gonzalez arm had fared terribly, not only much worse than subjects in the chemotherapy arm, but also worse than 20,000 historical controls gleaned from the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute.
In my last post on the topic, I explained numerous problems with the formal report, contrasting it with the Consolidated Standards of Reporting Trials (CONSORT), which the JCO expects authors to honor. I explained that the results were nevertheless sufficient to disqualify the Gonzalez Regimen once and for all, Gonzalez’s own objections notwithstanding. I explained why Gonzalez’s name had not appeared in the article. I reiterated that the trial was unethical in the extreme, and lamented the JCO’s decision to publish the report—in direct violation of this statement in the Declaration of Helsinki, to which the JCO claims allegiance:
30. Authors, editors and publishers all have ethical obligations with regard to the publication of the results of research… Reports of research not in accordance with the principles of this Declaration should not be accepted for publication.
I mentioned ways in which the results of the trial could have been made public without violating that important statement. Since things had not turned out that way, I hoped that the JCO would eventually publish “an editorial acknowledging the ethical problems with the trial and the dilemma involved in choosing to publish the report.” I sent an email to an acquaintance who was a member of the JCO editorial board, asking that he or Editor-in-Chief Daniel Haller would recognize that the issue oughtn’t be ignored. I cited the pertinent SBM posts. I was ignored. When no editorial or other comment appeared in the JCO over the next few months, I stopped looking.
Only recently I noticed that the JCO had solicited an editorial after all; authored by Dr. Mark Levine of McMaster University, it appeared online in March, 2010. The title offered a bit of hope that the journal had finally “got it”:
Conventional and Complementary Therapies: A Tale of Two Research Standards?
Alas, ‘twas not to be. The first clue is found at the beginning of the essay:
In the early 1990s, I began to notice that some patients were choosing alternatives to the mainstream or conventional treatments. These so-called alternative therapies included a range of interventions, such as dietary and behavioral interventions, vitamin supplements and herbs, and traditional systems such as Chinese and homeopathic medicines. Most of these were not supported by sound scientific methods.
Notice the word “methods.” Dr. Levine didn’t write, “most of these were not supported by sound science.” This suggests that he was partially blinded by the EBM understanding of “evidence” and “scientific.” The next clue consists of the entire second paragraph, a series of bland restatements of proponents’ definitions (from the NCCAM, in this case) and ambiguous assertions about “CAM” that have come to be typical for medical academics who ought to know better. But I digress from the main point.
In the third paragraph, Dr. Levine verifies what we had suspected above:
I am fortunate to have spent my entire academic career at McMaster University (Hamilton, Ontario, Canada), the birthplace of evidence-based medicine, and to have had the privilege of learning from colleagues such as David Sackett, MD, and Gord Guyatt, MD.
Before proceeding, let me acknowledge—particularly for the benefit of Dr. Levine or any other EBM aficionados who may read this—that his essay is, in some ways, a good one. He was asked to comment on two reports of “CAM” treatments that had appeared in the JCO: the Gonzalez trial and a trial of acupuncture for hot flashes in women receiving anti-estrogen therapy for breast cancer. In each case he made valid criticisms of the methods and of the authors’ conclusions. He stressed that the authors hadn’t specified the questions that they were trying to answer. He wrote that “non-inferiority” studies require larger sample sizes than “superiority” studies, but that the authors of these reports hadn’t offered power calculations to justify their (small) sample sizes. He pointed out that the “control” intervention in the acupuncture trial, venlafaxine (Effexor), doesn’t work very well, thus implying that it may mean very little to judge a novel treatment similarly efficacious.
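Dr. Levine’s sample-size point is easy to illustrate. Here is a minimal sketch of the standard normal-approximation calculation for a non-inferiority comparison of two proportions; it is my own illustration, not taken from his essay, and the rates and margins are hypothetical:

```python
import math
from statistics import NormalDist

def noninferiority_n_per_arm(p, margin, alpha=0.025, power=0.8):
    """Approximate per-arm sample size for a non-inferiority trial of two
    proportions, assuming both arms truly share response rate p and the
    trial must rule out a deficit larger than `margin` (one-sided alpha,
    normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha)   # critical value for one-sided alpha
    z_b = NormalDist().inv_cdf(power)       # critical value for desired power
    return math.ceil((z_a + z_b) ** 2 * 2 * p * (1 - p) / margin ** 2)

# Ruling out a 10-point deficit around a 50% response rate:
print(noninferiority_n_per_arm(0.5, 0.10))  # 393 per arm

# Halving the margin roughly quadruples the requirement:
print(noninferiority_n_per_arm(0.5, 0.05))  # 1570 per arm
```

The tighter the non-inferiority margin, the larger the trial must be, which is precisely why a small study with no power calculation cannot support a claim that a novel treatment is “no worse” than the comparator.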
Nevertheless, in regard to the Gonzalez trial Dr. Levine faltered even as he applied what would usually be a solid EBM-style criticism:
The question addressed in the original randomized trial by Chabot et al was not specified. Was the hypothesis that chemotherapy is better than enzyme therapy, or was it that enzyme therapy is no worse than chemotherapy? My answer to this is, “I do not know.”
While this is technically correct, it misses the more important point, which can only be appreciated by looking beyond the trial itself: the interesting question, given the implausible nature of the method, is whether “enzyme therapy” for cancer of the pancreas is anything like what Gonzalez and his patrons had cracked it up to be prior to the trial. The most reasonable cohort to compare this group to, then, is not the chemotherapy arm reported in the JCO article, but Gonzalez’s case series that had ostensibly justified the trial in the first place:
As of 12 January 1999, of 11 patients entered into the study, 9 (81%) survived one year, 5 (45%) survived two years, and at this time, 4 have survived three years. Two patients are alive and doing well: one at three years and the other at four years.
Contrast those results with the 32 “enzyme-group” subjects reported in the JCO: 7 were alive at 10 months, 2 at 20 months, 1 at 30 months, and none at 40 months. Median survival was 4.3 months. We do not need formal statistics or a new, randomized trial with a larger sample size to justify dismissing the Gonzalez regimen.
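The disparity is stark enough to see with nothing more than the fractions quoted above. A quick sketch (the time points in the two sources differ slightly, roughly 12 versus 10 months and 24 versus 20 months, so this is an approximate comparison):

```python
# Survival fractions quoted above: Gonzalez's 1999 case series (11 patients)
# versus the enzyme arm of the published trial (32 subjects).
case_series = {"~1 year": 9 / 11, "~2 years": 5 / 11}
trial_arm   = {"~1 year": 7 / 32, "~2 years": 2 / 32}

for t in case_series:
    print(f"{t}: case series {case_series[t]:.0%} vs trial {trial_arm[t]:.0%}")
# ~1 year: case series 82% vs trial 22%
# ~2 years: case series 45% vs trial 6%
```

No confidence interval is going to rescue a treatment whose trial results fall this far short of the very case series used to justify the trial.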
Next (this weekend, I promise): Human Studies Ethics: Why Science Matters
*The “Gonzalez Regimen” Series:
† The Prior Probability, Bayesian vs. Frequentist Inference, and EBM Series:
16. What is Science?