The cycle should by now be very familiar – a new acupuncture study purports to show that acupuncture works for something. The press eats it up gullibly. Proponents shout it from the rooftops, declaring that they finally have proof. But then skeptics point out the deep flaws in the study and that it really doesn’t prove anything. Unfortunately this cycle has resulted in the slow creep of acupuncture into medicine, because most people don’t pay attention long enough to read the critical analysis.

So let’s complete the cycle for yet another acupuncture study, this one for chronic stable angina – heart pain resulting from poor circulation. Here are the main findings:

This randomized clinical trial that included 404 patients with chronic stable angina found that acupuncture on the acupoints in the disease-affected meridian significantly reduced the frequency of angina attacks compared with acupuncture on the acupoints on the nonaffected meridian, sham acupuncture, and no acupuncture.

Superficially it seems reasonable. The size is not huge but it’s not small either. There are appropriate controls, and they said it was blinded. The results are all statistically significant:

Mean changes in frequency of angina attacks differed significantly among the 4 groups at 16 weeks: a greater reduction of angina attacks was observed in the DAM group vs the NAM group (difference, 4.07; 95% CI, 2.43-5.71; P < .001), in the DAM group vs the SA group (difference, 5.18; 95% CI, 3.54-6.81; P < .001), and in the DAM group vs the WL group (difference, 5.63 attacks; 95% CI, 3.99-7.27; P < .001).

OK, so why am I not convinced? Of course, one clinical study is never convincing by itself, but beyond that there are plenty of reasons to be suspicious of this study. First, it purports to show that acupuncture in the “correct” meridians is more effective than in the “incorrect” meridians. I use scare quotes because over a century of medical science leads to the extremely firm conclusion that meridians do not exist. They have no basis in anatomy, physiology, biology, or reality. They are based entirely on prescientific notions of life energy. So it is appropriate to look skeptically on any study that claims suddenly they are real, just as I would be skeptical of a study that claimed that the four humors or miasmas were real.

We also have thousands of clinical studies which overwhelmingly show no difference in effect from doing acupuncture in different locations. The few studies that do show a difference are outliers. Further, it’s not just the number of studies, but the quality of studies that matters. The better studies tend to show no difference, and it only seems like you see a difference if there are one or more major flaws in the study. So what about this one?

One major flaw you have to read deep into the study to find is that the “acupuncture” treatments included electrical stimulation. Electrical stimulation is a treatment unto itself, with its own physiological effects. “Electroacupuncture” is simply a way to mix two different interventions, so that they cannot be teased apart. So we then have no idea which component had the effect. History here is a guide, however, as many of the studies that purport to show acupuncture works cheat by actually doing electrical stimulation in the guise of acupuncture.

Further, the study was single-blinded – the acupuncturists administering the treatment were not blinded. They knew if they were in the treatment group or not. This is a critical flaw, and prior studies show that the interaction with the acupuncturist is actually the most important factor in determining a subjective response. To compound this weakness, there was no attempt to measure the success of patient blinding. So it is highly possible that the acupuncturists influenced the subjects, and that this was entirely responsible for the observed effect.

In support of this conclusion is the fact that there was very little difference between the sham acupuncture and no intervention groups. This flies in the face of usual outcomes in such trials. In prior acupuncture trials looking at subjective outcomes, there is almost always a difference in the unblinded comparison between getting some type of acupuncture (real, sham, or simulated) and the no intervention group. The lack of a significant difference in this study to me strongly suggests that the sham acupuncture group was unblinded by their interactions with the acupuncturist.

Underlying this conclusion is the fact that the outcome was entirely subjective – reports of angina. There were no objective physiological measures to back up simply recording symptoms.

So once again, we have a purely subjective outcome, with poor blinding, no assessment of the blinding, and dubious outcomes (lack of a difference between sham acupuncture and nothing). Further, the results are muddied by the use of electrical stimulation.

Finally there is a meta-reason, if you will, to be suspicious of this study – as noted by Edzard Ernst, the authors are all Chinese and affiliated with schools of acupuncture or Traditional Chinese Medicine. It is legitimate to consider the role of bias in assessing a study, and cultural and political biases can be powerful. But we don’t have to speculate about this, we have evidence.

In 1998 Vickers et al published a review of acupuncture studies in various countries and found that:

No trial published in China or Russia/USSR found a test treatment to be ineffective.

That’s right – 100% of acupuncture studies published in China were positive. Even for a treatment that clearly works, this result is all but statistically impossible. It is powerful evidence of bias, probably a combination of researcher bias and publication bias. Lest you think this result is 21 years old and therefore perhaps no longer valid, an updated study in 2014 again found that:

This review found 847 reported randomized clinical trials of acupuncture in Chinese journals. 99.8% of these reported positive results.

There is even evidence to partially explain these suspicious results. Reviews have found lower methodological standards in Chinese acupuncture studies, compared to the West:

The methodological quality was lower than the international standard and only one RCT paper described the trial technological process. And there were a less reports about influencing RCT quality, such as acupuncturist’s qualification and education background (3.06%), adverse reaction (4.23%), blind method (5.98%) and follow-up (14.43%).

Meanwhile a 2017 review found that only 8.7% of clinical trials of TCM (half of which are acupuncture studies) that were registered in clinicaltrials.gov had reported their findings. So there is evidence of poor methodology and publication bias. The current study is a great example of what we typically see – there is always wiggle room in the methodology for a little bit of bias, sufficient to explain the results.

After decades and thousands of acupuncture studies, there is simply no reason for these methodological flaws, which have all been pointed out numerous times before. Unless, of course, they are a feature and not a bug. The really rigorous studies of acupuncture tend to be negative, and consistently show no effect from needle position or penetration. The totality of the clinical evidence strongly favors the conclusion that acupuncture is a theatrical placebo with little actual specific therapeutic effect.

Cutting corners on methodological rigor is one way to manufacture positive evidence. There is strong cultural and political pressure to promote TCM, and China has been plagued by scientific misconduct and fraud that dwarfs other countries in its magnitude.

Once again we have a study being promoted as a game-changer for acupuncture, but it is actually just more of the same – highly flawed, and completely unconvincing.

Author

  • Founder and currently Executive Editor of Science-Based Medicine, Steven Novella, MD is an academic clinical neurologist at the Yale University School of Medicine. He is also the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella has also produced two courses with The Great Courses, and published a book on critical thinking – also called The Skeptics’ Guide to the Universe.

Posted by Steven Novella