The press release declares: “Acupuncture can relieve lower back/pelvic pain often experienced during pregnancy.” This is based on a meta-analysis of 10 studies. However, even a cursory look at the data reveals a very different picture. This may reflect the difference between EBM (evidence-based medicine) and SBM (science-based medicine), the latter of which takes a more complete approach to analyzing the patterns of evidence in the literature. But honestly, any competent EBM analysis should come to the same conclusion.

The study itself is a great example of the dictum “garbage in, garbage out” that plagues meta-analysis. Taking a bunch of flawed studies and adding them together does not create good data. These studies also reflect how deeply flawed the acupuncture literature is, and the frustratingly persistent pattern of reporting essentially negative studies or worthless studies as if they are positive. This meta-analysis, in my opinion, is not only completely consistent with acupuncture being nothing but placebo, it strongly points toward that conclusion.

The authors' ultimate conclusion was:

Acupuncture significantly improved pain, functional status and quality of life in women with LBPP during the pregnancy. Additionally, acupuncture had no observable severe adverse influences on the newborns. More large-scale and well-designed RCTs are still needed to further confirm these results.

This type of conclusion is now a cliché here at SBM – “more study is needed” is a euphemism for “the data is negative, but we really want this to work”. The study reveals both generic problems with meta-analysis, and the persistent problems with acupuncture studies, in an overall pattern that screams “this is a placebo intervention”. Let’s dive into the details.

Here is a table of the 10 included studies, showing the main quality characteristics of each study. You will notice that no study is green across the board – which is not only a reasonable standard, it is absolutely necessary for an intervention with a subjective outcome. Specifically, eight of the ten studies did not have a blinded outcome assessment. That alone means that the pooled data is essentially worthless. An unblinded assessment of a subjective endpoint, like pain, is essentially measuring placebo effects, with no way of teasing out any non-placebo effects. The two studies that had blinded outcome measures did not have blinded participants. Therefore, no single study was adequately double blinded. Would anyone accept this kind of data for a non-“alternative” intervention?

But it gets worse. Only two of the studies compared acupuncture to placebo acupuncture. One study compared acupuncture at two different times. The other seven studies compared acupuncture to no intervention or standard of care. So of course subjects in the no-intervention group knew they were in the control group. These types of comparisons are essentially worthless for pain studies.

Two of the studies had a >20% dropout rate in the non-intervention group. A large dropout rate is always problematic, because of the possibility that it is not random. But it's worse when the dropout is concentrated in the non-intervention group. This also tends to happen when you don't properly blind subjects – why stay in a study when you know you are in the placebo group?

One way to support a subjective outcome in a clinical trial is to combine it with secondary outcomes that are more objective and quantifiable, even if they are indirect. In pain studies it is common practice to assess use of rescue pain medications. Whether or not a subject takes pain medication may say much more about how much pain they are in than what they subjectively report. In this meta-analysis, there was no difference in the use of analgesic medication between treatment and non-treatment groups. That is a pretty fatal blow to the overall results.

Essentially, unblinded subjects who knew they were in the intervention group, assessed by raters who also knew their group assignment, subjectively reported feeling better, while still taking just as much pain medication. There is zero reason to conclude from this data that the intervention is anything but placebo.

The individual studies are all of dubious quality, but is there also any evidence of publication bias? Funnel plots are often used to evaluate publication bias, because they provide a rapid visual representation of the data. A funnel plot has study precision (typically the standard error, which serves as a proxy for study size and quality) on the vertical axis and effect size on the horizontal axis. What we like to see is symmetry, which means there is a fair statistical spread of results around the true effect size, with the more precise, higher quality studies clustering closer to the mean (hence the inverted funnel shape).

What we see here is decidedly asymmetrical. This is a classic outcome indicating publication bias, with more studies on the right side of the funnel. But there is something else here which is so classic it merits pointing out specifically. You will notice the two outlier studies in the upper left of the funnel. This shows that the two highest quality studies hover around zero effect. In fact there is a general trend that can easily be seen in this plot, with the higher quality studies having smaller effect sizes, and the best studies being negative. This pattern, arguably more than anything else in the data, strongly points toward the null hypothesis – acupuncture is just placebo. We consistently see this pattern in studies of dubious phenomena.
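The mechanism described above can be sketched with a toy simulation (hypothetical data, not the studies in this meta-analysis): if the true effect is zero but small, imprecise studies are only published when they happen to show a positive result, the published funnel becomes asymmetrical, with the imprecise studies skewing positive and the precise studies hovering around zero.

```python
# Illustrative sketch (simulated, hypothetical data): how publication
# bias produces an asymmetrical funnel plot when the true effect is zero.
import numpy as np

rng = np.random.default_rng(0)

n_studies = 200
# Standard error stands in for study precision: large studies -> small SE.
se = rng.uniform(0.05, 0.5, n_studies)
# Every study estimates a true effect of exactly zero, plus sampling noise.
effect = rng.normal(loc=0.0, scale=se)

# Publication filter: a study gets "published" if it is precise (large),
# or if its result is positive and nominally significant (z > 1.96).
published = (se < 0.15) | (effect / se > 1.96)

pub_effect = effect[published]
pub_se = se[published]

# In a symmetric funnel, the mean effect is ~0 at every precision level.
# After the filter, the imprecise published studies skew positive:
small = pub_effect[pub_se >= 0.15]   # imprecise (lower-quality) studies
large = pub_effect[pub_se < 0.15]    # precise (higher-quality) studies

print(f"mean effect, imprecise published studies: {small.mean():.3f}")
print(f"mean effect, precise published studies:   {large.mean():.3f}")
```

The precise studies cluster near the true (null) effect while the surviving imprecise studies sit well to the right – exactly the high-quality-studies-near-zero pattern seen in this funnel plot.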

There is a final feature of this meta-analysis that is specific to acupuncture – heterogeneity in treatment methods. Heterogeneity itself (differences from study to study) is a generic problem of meta-analysis, but it especially plagues acupuncture research. To understand this problem we have to back up a bit and ask – what is acupuncture? Basically, it is sticking fine needles into acupuncture points to elicit a physiological response that is supposed to be specific to the points used (though there is also an incredible amount of diversity packed into this basic idea). If you can poke the skin randomly with toothpicks (without penetration and at random points), is that really acupuncture?

In fact, the totality of the acupuncture literature itself is consistent with the conclusion that acupuncture points do not exist. Further, in the last century no basic science has emerged to support or explain the existence of acupuncture points. If acupuncture points don’t exist, then acupuncture is not real. One important line of evidence for this conclusion is that acupuncturists can’t agree on where acupuncture points are precisely, and what they are used for. If acupuncture points were real, then the evidence should start to converge on reality. If acupuncture is astrology (which it is), then acupuncture points will remain in the realm of personal opinion.

In this meta-analysis, seven studies used body acupuncture while three used auricular acupuncture (on the ear). Are these even really studying the same intervention? As you can see in the comparison table, none of the ten studies used the same acupuncture points as any of the other studies. There are a few points of overlap, but the points chosen are far more different than they are similar. In most acupuncture studies the points are chosen by consensus of the study designers. There is no standardization – because acupuncture points are not real.

Conclusion: For acupuncture, GIGO rules

This meta-analysis should properly be interpreted as negative – as consistent with the null hypothesis that there is no specific effect of acupuncture in the studied intervention beyond placebo. I outlined above why this is overwhelmingly true. But again, the most damning evidence is the inverse relationship between study quality and effect size, with the best studies being negative.

This evidence certainly does not justify rejecting the null hypothesis, and that’s all that really matters. This is especially true of an intervention with an extremely low prior plausibility, and lack of a known or plausible mechanism. No matter how you look at acupuncture, from a historical, basic science, or clinical evidence perspective, it is nothing but an elaborate placebo. This meta-analysis is perfectly consistent with that conclusion. And yet, studies like this are consistently misrepresented to falsely support acupuncture.

Author

  • Founder and currently Executive Editor of Science-Based Medicine Steven Novella, MD is an academic clinical neurologist at the Yale University School of Medicine. He is also the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella also has produced two courses with The Great Courses, and published a book on critical thinking – also called The Skeptics’ Guide to the Universe.
