Homeopathy and Science: Discussion, Summary and Conclusions

I was not surprised by a couple of the dissenting comments after Part IV of this blog. One writer worried that I had neglected, presumably for nefarious reasons, to cite replications of Benveniste’s results; another cited several examples of “positive” homeopathy studies that I had failed to mention. I answered some of those points here. I am fully aware of such “positive” reports, including those seeming to support Benveniste. I didn’t cite them, but not in some futile hope of concealing their existence from the watchful eyes of the readership. I also didn’t cite several “negative” reports, including an independent, disconfirming report of one of the claims of David Reilly, whose words began this series,* and the most recent of several reviews (referenced here) to conclude that “the clinical effects of homoeopathy are placebo effects.” I didn’t cite those reports for the same reasons that I didn’t cite the “positive” studies: they are mere footnotes to the overwhelming evidence against homeopathy.

To explain why, it will be necessary to discuss some of the strengths and weaknesses of the project known as “Evidence-Based Medicine.”

“Evidence-Based Medicine” Primer

Academic defenders of homeopathy and other implausible methods argue that even though such claims are highly implausible, they still ought to be subjected to studies. It would be arrogant, they say, to simply abandon them without investigation, and there is the potential to overlook significant therapeutic advances. They argue that randomized, controlled trials (RCTs), especially those in which both subjects and investigators are blinded to the interventions, will separate the “wheat from the chaff.” This type of study is the “gold standard” for evidence of therapeutic efficacy of any heretofore-unproven treatment, and has been instrumental in the objective determination of both worthwhile and worthless treatments for several decades.

Eventually, several studies of a treatment are examined in the aggregate, in the form of “meta-analyses” (if they are similar enough to combine data) or “systematic reviews” (if they are not). If such reviews can justify a strong conclusion for or against the value of the treatment, it will typically be accepted by physicians as the most rational basis for clinical decisions. This process and its literature are keys to the practice that is collectively referred to as “evidence-based medicine” (EBM). It applies not only to therapeutic decisions, but also to diagnostic tests and other aspects of clinical medicine.
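The core pooling step of a meta-analysis is simple enough to sketch. The following is a minimal illustration only, not the method of any particular review: fixed-effect, inverse-variance pooling of invented effect estimates from three hypothetical trials.

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Fixed-effect (inverse-variance) pooling of per-study effect estimates.

    Each study is weighted by the inverse of its variance, so large,
    precise studies dominate small, noisy ones.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical log-odds-ratio effects and standard errors from three small trials
est, se = fixed_effect_meta([0.30, 0.10, -0.05], [0.25, 0.20, 0.30])
print(f"pooled estimate {est:.3f} +/- {se:.3f}")
```

The pooled estimate is only as good as the studies feeding it, which is the crux of the argument that follows: combining biased inputs yields a precise-looking but biased output.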

Some might be surprised to find that EBM is not synonymous with “science-based medicine.” Although based on previous, evolving standards of clinical trial designs, statistics, epidemiological methods and other pertinent tools, EBM is a semi-formal movement within modern medicine that has existed for fewer than 20 years; it comprises sets of guidelines for assessing evidence, which will be discussed further below.

EBM and “CAM”

To many in this era of EBM it seems self-evident that all unproven methods, including homeopathy, should be subjected to such scrutiny. After all, the anecdotal impressions that typically form the bases for such claims are laden with the very biases that blinded RCTs were devised to overcome. That opinion, however, is naive. Some claims are so implausible that clinical trials tend to confuse, rather than clarify, the issue. Human trials are messy. It is impossible to make them rigorous in ways comparable to laboratory experiments. Compared to laboratory investigations, clinical trials are necessarily less well powered and more prone to numerous other sources of error: biases, whether conscious or not, causing or resulting from non-comparable experimental and control groups; cuing of subjects; post-hoc analyses; multiple-testing artifacts; unrecognized confounding of data due to subjects’ own motivations; non-publication of results; inappropriate statistical analyses; conclusions that don’t follow from the data; inappropriate pooling of non-significant data from several small studies to produce an aggregate that appears statistically significant; fraud; and more.

Most of those problems are not apparent in primary reports. Several have already been discussed or referenced elsewhere on this site: here, here, here and here, for example. Academics active in the EBM movement are aware of most of them and want to correct them—as a quick scan of the contents of almost any major medical journal will reveal.

It is clear that such biases are more likely to skew the results of studies that are funded or performed by advocates. This has been found in studies of trials funded by drug companies, for example, as referenced here. In the case of “CAM,” the charge is supported by the preponderance of favorable reports in advocacy journals (here, here, and here) and by examples of overwhelmingly favorable reports emanating from regions with strong political motivations.

For those reasons we can predict that RCTs of ineffective claims championed by impassioned advocates will demonstrate several characteristics. Small studies, those performed by advocates or reported in advocacy journals, and those judged to be of poor quality will tend to be “positive.” The larger the study and the better the design, the more likely it is to be “negative.” Over time, early “positive” trials and reviews will give way to negative ones, at least among those judged to be of high quality and reported in reputable journals. In the aggregate, such trials will appear to yield equivocal rather than merely “negative” outcomes. The inevitable, continual citations of dubious reports will lead some to judge that the aggregate data are “weakly positive” or that the treatment is “better than placebo.” An example is the claim that stimulation of the “pericardium 6” acupuncture point is effective in the prevention and treatment of post-operative nausea and vomiting—a purportedly proven “CAM” method.
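A toy simulation makes the non-publication point concrete. Everything below is invented for illustration: 200 small “trials” of a treatment whose true effect is zero, of which only the ones that happen to look favorable get “published.” The 0.3 cutoff and sample sizes are arbitrary choices; the qualitative result is not.

```python
import random
import statistics

random.seed(1)  # deterministic, for illustration only

def small_trial(n=20):
    """One underpowered trial of an inert treatment: the true effect is zero."""
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

results = [small_trial() for _ in range(200)]

# Crude publication filter: only apparently favorable results reach print.
published = [r for r in results if r > 0.3]

print(f"all 200 trials, mean effect: {statistics.mean(results):+.3f}")
print(f"{len(published)} 'published' trials, mean: {statistics.mean(published):+.3f}")
```

Averaged over all 200 trials the effect is near zero, as it must be; averaged over only the “published” ones it looks comfortably positive. A reviewer who sees only the published subset will conclude that an inert treatment works.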

Homeopathic “Remedies” are Placebos

After 200 years and numerous studies, including many randomized, controlled trials and several meta-analyses and systematic reviews, homeopathy has performed exactly as described above. The best that proponents can offer is equivocal evidence of a weak effect compared to placebo. That is exactly what is expected if homeopathy is a placebo.

Nevertheless, EBM advocates on the whole don’t see it that way. Those who want to see homeopathy vindicated, such as homeopath Wayne Jonas, the former director of the NIH Office of Alternative Medicine, point to the weakly positive evidence. Others, even those who find homeopathy implausible, are so convinced that EBM can answer the question (“Either homeopathy works or controlled trials don’t!”) that they call for more trials, with no end in sight. Such judgments expose a major weakness in EBM that is not apparent when the exercise is applied to plausible claims.

Evidence-Based Medicine and Evidence

“When I use a word,” Humpty Dumpty said in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master — that’s all.”

There are sources of substantial error in EBM that apply more to trials of implausible claims than to trials of plausible ones, and that are generally not acknowledged by academics. The first is that the EBM “levels of evidence” hierarchy renders each entry sufficient to trump those below it (Figure). Thus a “positive” clinical trial is given more weight than “physiology, bench research or ‘first principles’,” even when the latter definitively refute the claim.

Figure: Oxford Centre for Evidence-based Medicine Levels of Evidence (May 2001)

Level Therapy/Prevention, Aetiology/Harm
1a SR (with homogeneity*) of RCTs
1b Individual RCT (with narrow Confidence Interval‡)
1c All or none§
2a SR (with homogeneity*) of cohort studies
2b Individual cohort study (including low quality RCT; e.g., <80% follow-up)
2c “Outcomes” Research; Ecological studies
3a SR (with homogeneity*) of case-control studies
3b Individual Case-Control Study
4 Case-series (and poor quality cohort and case-control studies§§)
5 Expert opinion without explicit critical appraisal, or based on physiology, bench research or “first principles”

Grades of Recommendation

A consistent level 1 studies
B consistent level 2 or 3 studies or extrapolations from level 1 studies
C level 4 studies or extrapolations from level 2 or 3 studies
D level 5 evidence or troublingly inconsistent or inconclusive studies of any level

For judging homeopathy, EBM deems the equivocal results of clinical efficacy trials to be of more value than other evidence discussed in this series: definitive refutation of the “law of similars”; the doctrine of “infinitesimals” violating the second law of thermodynamics; no coherent bases for predicting consistency or validity of “symptoms” and “provings,” or of the homeopathic prescribing scheme, and studies confirming the lack of such validity; definitive refutations of Hahnemann’s magical “theories” of what diseases are and how homeopathy works, based on his notions of “Dynamic Deranging Irritations of the Vital Force”; later homeopaths’ arbitrary inventions of more implausible treatments, e.g., “nosodes” and “constitutional” prescribing; recent inventions of fantastic theories to explain the failings of the rest, e.g., “water memory,” “non-local” (psychic?) explanations or “quantum-like” effects to explain the “entanglement-disrupting effects of blinding” in clinical trials, and more.

Another way of thinking about this is to observe that homeopathy lacks several criteria that suggest a viable hypothesis: simplicity, conservatism, fruitfulness, and scope. [1] Regarding the last two, is there anything in nature that the tenets of homeopathy—the “eternal, infallible law of nature”—can explain better than can current scientific theory? I can’t think of a single natural phenomenon that homeopathic “theory” can explain at all. [2] It doesn’t even explain homeopathy in a coherent way. Other, well-characterized but mundane social and psychological factors do that much better.

When this sort of evidence is weighed against the equivocal clinical trial literature, it is abundantly clear that homeopathic “remedies” have no specific, biological effects. Yet EBM relegates such evidence to “Level 5”: the lowest in the scheme. How persuasive is the evidence that EBM dismisses? The “infinitesimals” claim alone is the equivalent of a proposal for a perpetual motion machine. The same medical academics who call for more studies of homeopathy would be embarrassed, one hopes, to be found insisting upon studies of perpetual motion machines. Basic chemistry is still a prerequisite for medical school, as far as I’m aware.
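The arithmetic behind the perpetual-motion comparison is worth making explicit. A 30C remedy is thirty serial 1:100 dilutions, a factor of 10^60. Even granting the generous assumption of a full mole of starting substance, the expected number of original molecules remaining is:

```python
AVOGADRO = 6.022e23            # molecules per mole
starting_moles = 1.0           # generous assumption: a whole mole of solute to begin with
dilution_factor = 100.0 ** 30  # 30C = thirty successive 1:100 dilutions = 1e60

expected_molecules = starting_moles * AVOGADRO / dilution_factor
print(expected_molecules)      # about 6e-37: essentially zero
```

Any given “dose” of a 30C remedy is thus overwhelmingly unlikely to contain even a single molecule of the labeled substance, which is why the doctrine of “infinitesimals” requires water to do something that basic chemistry says it cannot.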

In summary, the evidence that homeopaths’ perceptions are due to something other than what they claim is comparable to the evidence that the earth is spheroid rather than planar, that the planets orbit the sun rather than the earth, that the positions and movements of planets do not affect human affairs, that Newton’s gravitational theory holds to a high degree of precision for everything from apples falling from trees to the motions and long-distance effects of space ships, comets, and other celestial bodies, even though it is incomplete, that electricity and magnetism are the same thing, that mass is conserved, that the earth is several billion years old, that its crust comprises several “plates” that move around, that species evolved by a process of variation and natural selection, that Avogadro’s number is the same on Mars as it is here, and so forth. In other words, it is the sort of basic science that can reasonably be called “established knowledge.”

Is it realistic to assume that this “level” of evidence, when brought to bear on a claim that has no explanatory power in nature, can be overthrown by ambiguous clinical trials of dubious design? EBM makes that assumption.

It wasn’t meant to be like this. When I first discussed with my fellow bloggers the curious absence of established knowledge in the EBM “levels of evidence” hierarchy, at least one insisted that this could not be true, and in a sense he was correct. David Sackett and other innovators of EBM do include basic science in their discussions, but they recommend invoking it only when there are no clinical trials to consider:

Evidence based medicine is not restricted to randomised trials and meta-analyses. It involves tracking down the best external evidence with which to answer our clinical questions…And sometimes the evidence we need will come from the basic sciences such as genetics or immunology. It is when asking questions about therapy that we should try to avoid the non-experimental approaches, since these routinely lead to false positive conclusions about efficacy. Because the randomised trial, and especially the systematic review of several randomised trials, is so much more likely to inform us and so much less likely to mislead us, it has become the “gold standard” for judging whether a treatment does more good than harm.

That statement is consistent with EBM’s formal relegation of established knowledge to “level 5,” as seen in the Figure. I am not a historian of EBM and don’t care to be, but I suspect that the explanation for this choice is that “they never saw ‘CAM’ coming.” In other words, it probably didn’t occur to Sackett and other EBM pioneers that anyone would consider performing clinical trials of methods that couldn’t pass muster on grounds of scientific plausibility. Their primary concern was to emphasize the insufficiency of basic science evidence in determining the safety and effectiveness of new treatments. In that they were quite correct, but trials of “CAM” have since reminded us that although established knowledge may be an insufficient basis for accepting a treatment claim, it is still a necessary one.

Lacking that perspective, Sackett’s Center for Evidence-Based Medicine promulgates an “Introduction to evidence-based complementary medicine” by “CAM” researcher Andrew Vickers. There is not a mention of established knowledge in it, although there are references to several claims, including homeopathy, that are refuted by things that we already know. Vickers is also on the advisory board of the Cochrane CAM Field, along with Wayne Jonas and several other “CAM” enthusiasts. The Cochrane Collaboration is a highly respected wellspring of “evidence,” in the EBM sense of the term. But its treatment of “CAM” claims suggests that “evidence” means just what the “CAM Field” chooses it to mean—neither more nor less. Perusing the Cochrane reviews of homeopathy reveals just how far down the rabbit hole the ghost of poor Archie Cochrane, the founder, has been led:

“In view of the absence of evidence it is not possible to comment on the use of homeopathy in treating dementia.”

“There is not enough evidence to reliably assess the possible role of homeopathy in asthma. As well as randomised trials, there is a need for observational data to document the different methods of homeopathic prescribing and how patients respond.”

“There is currently little evidence for the efficacy of homeopathy for the treatment of ADHD. Development of optimal treatment protocols is recommended prior to further randomised controlled trials being undertaken.”

“Though promising, the data were not strong enough to make a general recommendation to use Oscillococcinum for first-line treatment of influenza and influenza-like syndromes. Further research is warranted but the required sample sizes are large.”

And so on.

Next Week (or maybe later; this is time-consuming): Prior Probability: The Dirty Little Secret of “Evidence-Based Alternative Medicine”

[1] Schick T Jr. and Vaughn L. How to Think About Weird Things: Critical Thinking for a New Age, 2nd ed. Ch. 7. Mountain View, CA: Mayfield Publishing Company; 1999.

[2] Atwood KC. Homeopathy and Critical Thinking. Scientific Review of Alternative Medicine 5(3): 146-148, Summer 2001.


*The Homeopathy Series:

  1. Homeopathy and Evidence-Based Medicine: Back to the Future – Part I
  2. Homeopathy and Evidence-Based Medicine: Back to the Future – Part II
  3. Homeopathy and Evidence-Based Medicine: Back to the Future–Part III
  4. Homeopathy and Evidence-Based Medicine: Back to the Future Part IV
  5. Homeopathy and Evidence-Based Medicine: Back to the Future Part V
  6. Harvard Medical School: Veritas for Sale (Part III)
  7. The Dull-Man Law
  8. Smallpox and Pseudomedicine


The Prior Probability, Bayesian vs. Frequentist Inference, and EBM Series:

1. Homeopathy and Evidence-Based Medicine: Back to the Future Part V

2. Prior Probability: The Dirty Little Secret of “Evidence-Based Alternative Medicine”

3. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued

4. Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”—Continued Again

5. Yes, Jacqueline: EBM ought to be Synonymous with SBM

6. The 2nd Yale Research Symposium on Complementary and Integrative Medicine. Part II

7. H. Pylori, Plausibility, and Greek Tragedy: the Quirky Case of Dr. John Lykoudis

8. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 1

9. Evidence-Based Medicine, Human Studies Ethics, and the ‘Gonzalez Regimen’: a Disappointing Editorial in the Journal of Clinical Oncology Part 2

10. Of SBM and EBM Redux. Part I: Does EBM Undervalue Basic Science and Overvalue RCTs?

11. Of SBM and EBM Redux. Part II: Is it a Good Idea to test Highly Implausible Health Claims?

12. Of SBM and EBM Redux. Part III: Parapsychology is the Role Model for “CAM” Research

13. Of SBM and EBM Redux. Part IV: More Cochrane and a little Bayes

14. Of SBM and EBM Redux. Part IV, Continued: More Cochrane and a little Bayes

15. Cochrane is Starting to ‘Get’ SBM!

16. What is Science? 



Posted by Kimball Atwood