
One of the core features of science (and therefore science-based medicine) is to precisely identify and control for variables, so that we know what, exactly, is exerting an effect. The classic example of this principle at work is the Hawthorne effect. The term refers to a series of studies performed between 1924 and 1932 at the Hawthorne Works. The studies examined whether workers would be more productive under different lighting conditions. So the researchers increased the light levels, observed the workers, and found that their productivity increased. Then they lowered the light levels, observed the workers, and found that their productivity increased again. No matter what they did, the workers improved their productivity relative to baseline. Eventually it became clear that observing the workers caused them to work harder, no matter what was done to the lighting.

This “observer effect” – an artifact of the process of observation – is now part of standard study design (at least of well-designed studies). In medical studies it is one of the many placebo effects that need to be controlled for in order to properly isolate the variable of interest.

There are many non-specific effects – effects that result from the act of treating or evaluating patients rather than from a physiological response to a specific treatment. In addition to observer effects, for example, there is also the “cheerleader” effect from encouraging patients to perform better. There are training effects from retesting. And there are long-recognized non-specific therapeutic effects just from getting compassionate attention from a practitioner. It is a standard part of medical scientific reasoning that, before we ascribe a specific effect to a particular intervention, all non-specific effects are controlled for and eliminated.

Within the world of so-called “complementary and alternative medicine” (CAM), however, standard scientific reasoning is turned on its head. After failing to find specific physiological benefits for many treatments under the CAM umbrella, proponents are desperately trying to sell non-specific effects as if they were specific to their preferred modalities. In other contexts this might be considered fraud. It is certainly scientifically dubious to the point of dishonesty, in my opinion.

The latest example of this is a study published in the journal Rheumatology: Homeopathy has clinical benefits in rheumatoid arthritis patients that are attributable to the consultation process but not the homeopathic remedy: a randomized controlled clinical trial. The study compared five groups: the first three received a homeopathic consultation plus either individualized treatment, complex treatment (meaning a standard preparation), or placebo for rheumatoid arthritis; the last two received no consultation and either complex homeopathic treatment or placebo. The study was double-blind with respect to the treatments, but not blinded with respect to whether or not a subject received a homeopathic consultation.

The results:

Fifty-six completed treatment phase. No significant differences were observed for either primary outcome. There was no clear effect due to remedy type. Receiving a homeopathic consultation significantly improved DAS-28 [mean difference 0.623; 95% CI 0.1860, 1.060; P = 0.005; effect size (ES) 0.70], swollen joint count (mean difference 3.04; 95% CI 1.055, 5.030; P = 0.003; ES 0.83), current pain (mean difference 9.12; 95% CI 0.521, 17.718; P = 0.038; ES 0.48), weekly pain (mean difference 6.017; 95% CI 0.140, 11.894; P = 0.045; ES 0.30), weekly patient GA (mean difference 6.260; 95% CI 0.411, 12.169; P = 0.036; ES 0.31) and negative mood (mean difference −4.497; 95% CI −8.071, −0.923; P = 0.015; ES 0.90).

In other words – there was no difference in any outcome measure among individualized homeopathic treatments, complex (standardized) treatment, or placebo. According to this study – homeopathy does not work for rheumatoid arthritis. That is really the only thing we can conclude from this study.
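A side note on the numbers: the statistics quoted above are at least internally consistent. For a symmetric 95% confidence interval you can back out the implied standard error and test statistic and compare them against the published p-value. Here is a minimal Python sketch, using a simple normal approximation (the authors presumably fit a more elaborate model, so treat this as a rough consistency check only):

```python
from scipy.stats import norm

# Reported DAS-28 result for the consultation comparison:
# mean difference 0.623, 95% CI 0.186 to 1.060, P = 0.005
mean_diff = 0.623
ci_low, ci_high = 0.186, 1.060

# A symmetric 95% CI spans roughly +/- 1.96 standard errors
se = (ci_high - ci_low) / (2 * 1.96)
z = mean_diff / se          # implied test statistic
p = 2 * norm.sf(abs(z))     # two-sided p-value

print(f"SE ~ {se:.3f}, z ~ {z:.2f}, p ~ {p:.4f}")
# SE ~ 0.223, z ~ 2.79, p ~ 0.0052 -- consistent with the published P = 0.005
```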

However, the authors spend a great deal of time trying to argue that the study supports the conclusion that, even though homeopathic products have zero effect, the homeopathic consultation process does have a beneficial effect. This is a flawed conclusion on several levels.

First, the study was not blinded for the consultation vs no-consultation comparison. Any comparison of this variable, therefore, is highly unreliable. It is likely no coincidence that the blinded comparisons were all negative, while some of the unblinded comparisons were positive. And even there, the results are weak: the primary outcome measure was negative, and only the secondary outcome measures, which are mostly subjective, were positive.
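Part of the reason unadjusted secondary outcomes deserve skepticism is simple arithmetic: the more outcomes you test at the conventional 0.05 threshold, the more likely at least one will come up “significant” by chance alone. A quick illustration in Python (this assumes the tests are independent, which is not strictly true here, but the point stands):

```python
# Probability of at least one false positive among k tests at alpha = 0.05,
# assuming (for illustration) that the tests are independent
alpha = 0.05
for k in (1, 3, 6):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k} outcome(s) tested: ~{fwer:.0%} chance of a false positive")
# 1 outcome:  ~5%
# 3 outcomes: ~14%
# 6 outcomes: ~26%
```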

And, perhaps most significantly, the study was not even designed to test the efficacy of the homeopathic consultation itself, because the consultation was compared essentially to no intervention (and in an unblinded fashion). There is therefore every reason to conclude that any perceived benefit from the consultation process is due to the non-specific effects of the clinical interaction – attention from a practitioner, expectation of benefit, the cheerleader effect, etc.

If the authors wished to test whether there is something special about the homeopathic consultation, then they should have compared it to attention from a health care provider, matched for time and personal attention but without the elements specific to the homeopathic consultation. This study did not do that.

Conclusion

The study itself was reasonably well designed. It is a bit small, especially given the number of comparison groups, and so was underpowered. But it did make a reasonable and blinded comparison between homeopathic preparations and placebo, and found no difference. This is in line with research into homeopathy in general – it is no different from placebo, which means homeopathy does not work.
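To put “underpowered” in rough numbers: 56 completers spread over five arms works out to about 11 subjects per group. A standard power calculation, sketched below with Python’s statsmodels library and assuming a two-sample t-test with a moderate effect size (Cohen’s d = 0.5 – an assumption for illustration, not a figure from the paper), shows how far short that falls:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Subjects per group needed to detect a moderate effect (d = 0.5)
# with the conventional 80% power at alpha = 0.05
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"~{n_needed:.0f} per group needed")       # ~64 per group

# Power actually achieved with ~11 completers per group
power = analysis.solve_power(effect_size=0.5, nobs1=11, alpha=0.05)
print(f"power with 11 per group: ~{power:.0%}")  # roughly 18%
```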

It is worth pointing out that homeopaths have complained in the past that comparing a standardized homeopathic complex to placebo is not fair, because homeopathy requires an individualized treatment. This, of course, contradicts the claims made for any homeopathic product on the shelves. But that point aside, in this study individualized treatments performed no better, contradicting that complaint.

The authors, however, are presenting the study as evidence that the homeopathic consultation works, when this study was not designed to test that variable. The effects can easily be explained as the non-specific effects of therapeutic attention. The study provides no basis upon which to conclude that there is any value to a homeopathic consultation beyond the raw benefit of time and attention.

The study does reinforce what previous studies have shown – attention from a provider does have measurable benefits to quality of life and subjective symptoms (even if not disease-altering). This should prompt a re-evaluation of the priorities given to reimbursing for provider time. It should be noted that homeopaths generally charge directly for their time, and are not typically contracted to accept an insurance-based fee schedule. So comparing a typical homeopathic consultation to a busy physician’s office is not fair.

Further – if we are going to advocate for increased money for provider time and attention, such resources should go to providers who are science-based and use modalities that have efficacy on their own. It is highly misguided, wasteful, and potentially dangerous to rely upon practitioners who utilize an unscientific philosophy and treatments that are admittedly worthless, just because there are non-specific subjective benefits from the attention given.

Edzard Ernst makes the same points in an editorial that the journal Rheumatology ran in the same issue. While I question the decision to publish this study with the conclusion as written (peer-reviewers and editors should have been much harsher on the hype and spin of the authors), at least they had the courage to include Ernst’s comments as well.


Author

  • Founder and currently Executive Editor of Science-Based Medicine Steven Novella, MD is an academic clinical neurologist at the Yale University School of Medicine. He is also the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella has also produced two courses with The Great Courses, and published a book on critical thinking - also called The Skeptics’ Guide to the Universe.

Posted by Steven Novella
