It has been fascinating (and frustrating) watching the cultural shift in attitudes toward acupuncture over the last two decades. It is a case study in how to promote pseudoscience, how to exploit the vulnerabilities in evidence-based medicine, and how effective propaganda can be, even against professionals who should know better. It also demonstrates the dire need for the inclusion of science-based medicine principles when evaluating medical interventions.

A recent Washington Post article by Trisha Pasricha, MD, MPH nicely illustrates the misinformation, and selective use of information, that leads even some professionals to the wrong conclusion. She comes to a few dubious conclusions, starting with this:

A 2018 meta-analysis of over 20,000 patients in 39 high-quality randomized controlled trials found that acupuncture was superior to both sham and no acupuncture for back or neck pain, osteoarthritis, headaches and shoulder pain. These outcomes mostly persisted over time — even after 12 months of receiving treatment.

She is referring to the latest Vickers et al systematic review of acupuncture for chronic pain. David Gorski has done a good job of deconstructing these reviews, and I will again point out the limitations. But first I will point out how this is a good example of the selective use of information by Pasricha, because she did not refer to a later 2020 systematic review (which included the 2018 Vickers review) that concluded:

Evidence from SRs suggests that there are insufficient high-quality RCTs to judge the efficacy of acupuncture for chronic pain associated with various medical conditions.

In other words, the evidence is too low quality to conclude that acupuncture works, as desperate as proponents are to say we can reach that conclusion. This is also how different experts can look at the same data and come to different conclusions – it depends on how much you weight different factors, what you consider an “acceptable” study, and how you control for bias. There is also a lot of spin. Even in the very favorable Vickers review, the authors acknowledge at the end:

Variations in the effect size of acupuncture in different trials are driven predominately by differences in treatments received by the control group rather than by differences in the characteristics of acupuncture treatment.

This understated admission is doing a lot of heavy lifting, but can easily be missed by the unwary. What this means is that differing “characteristics of acupuncture” (more on this below) do not affect treatment outcome. What determines the effect size (difference between sham and verum acupuncture) is how good the sham acupuncture is at simulating verum acupuncture. This is a pattern in the acupuncture literature we have long pointed out. The better the study, the larger the study, and the better the blinding, the smaller the effect size, with the best studies being negative. As you improve the control and rigor of the study, the effect of acupuncture diminishes to zero. That is a pattern in the research we see across all pseudosciences. It is a giant flashing light indicating that acupuncture is nothing but placebo.

But – if you just do a meta-analysis across the spectrum of study quality up to some arbitrary cutoff, you can see statistical significance. You just have to ignore all the troubling patterns within the data. This is the difference between the 2018 review and the 2020 review above.

Acupuncture has some special problems all its own as well, which I have summarized many times. Acupuncture is hard to blind, and only the best studies do it adequately. Both the subject and the acupuncturist have to be blinded, or else the interaction with the unblinded acupuncturist will have an effect. China famously has 100% positive clinical trials of acupuncture, a statistical impossibility, and these studies contaminate systematic reviews. Studies using electroacupuncture also contaminate reviews, introducing another variable (electrical stimulation) that cannot be separated from whatever the needles might be doing.

But also, devastating all by itself, is the heterogeneity of study design and outcome. This stems from the big glaring problem with acupuncture – what, if anything, are acupoints? Pasricha writes:

The World Health Organization consensus recognizes 361 standardized acupoints on the human body. Acupoints appear to respond to varied stimulation, such as from pressure, heat and electricity.

There’s still a lot we don’t know about how acupoints work. Some studies have shown that traditional acupoints may have a high density of nerve endings and mast cells. Stimulating these areas may lead to the release of chemicals in the body (such as hormones) and ultimately impact the brain.

This is a horrible summary of the science of acupoints. For example, she mentions that the WHO recognizes 361 acupoints, but does not point out that this number, traditionally 365 points, is based on the number of days in the year, because acupoints are a form of astrology in which astrological patterns are projected onto the body.

She could have also pointed out that systematic reviews, even by acupuncturists, fail to demonstrate where the acupoints actually are. They cannot be localized and there is no agreement on where they are or what they allegedly do. In short, acupuncture points don’t exist. They are a figment of a prescientific superstition. After, what, 60 or so years of research, there is still no convincing evidence of any anatomical, biochemical, or physiological basis for acupoints. Only tantalizing studies that show maybe some whisper of signal in a lot of noise, that cannot be reproduced or confirmed. Acupoints are the medical equivalent of blobsquatch.

This gets to the heterogeneity problem (one fatal aspect of it). When you combine different acupuncture studies for a systematic review or meta-analysis, invariably you are combining studies that used different acupuncture points for the same problem. How is that considered verum acupuncture? Does that mean that all of the points used in various combinations are correct (even within the giant error bars of where the points are), while everywhere else on the body is incorrect? Is there even an elsewhere? With hundreds of giant overlapping alleged acupoints, nowhere on the body is not an acupoint. And since there is absolutely no standardization in terms of which acupoints to use, any pattern is apparently acceptable.

The absence of standardization is another glaring feature of pseudoscience. Each acupuncturist can determine for themselves which acupoints to use for a particular problem, and where, exactly, those points are. For clinical trials, typically the acupuncturists involved in the study just decide by consensus which points to use, which is why different studies use different points.

The problem with a lack of mechanism for acupuncture has motivated proponents to look at increasingly noisy and unreliable methods for trying to find “stuff that happens” when you stick someone with needles. Pasricha refers to one study in carpal tunnel syndrome.

So scientists looked at the subjects’ brains using functional MRI imaging. They found that needling at the wrist and ankle both resulted in significant changes to how stimulation to the fingers was mapped onto the cerebral cortex.

That’s not really what they found. What they found, in my opinion, were random and unpredicted changes that were all over the place. This is the typical noise we see in fMRI studies. They mean nothing without consistent replication, which we don’t have. Sometimes the changes were in the affected hand, sometimes they were not, and it did not depend on where the acupuncture was done.

But there is also a separate reason to suspect some bias and p-hacking in the data. The study also found that sticking acupuncture needles in the opposite ankle improved nerve conduction in the affected wrist. This is not something that can be mediated through some remote and hypothetical brain effect. They need to explain how sticking a needle in the left ankle will cause the right median nerve at the wrist to have better myelination. No fMRI study of the brain is going to explain this. Something that unlikely is evidence of bias or poor research methods until proven otherwise.

Following the acupuncture story closely for 30 years also gives one a good perspective – because there seems to be a never-ending revolving door of these preliminary studies, and proponents breathlessly reassuring us that we are finally about to explain how acupuncture works. But we never seem to get closer. Acupuncture research has been chasing its tail for decades, because acupuncture is a fiction. There are no acupoints, and the effects of acupuncture are nothing but placebo. A fair and science-based reading of the literature clearly shows this. But proponents have become experts at creating this false narrative, and the Washington Post just fell for it.

Posted by Steven Novella