Poorly done acupuncture studies are published every week, so I can’t write about every one that comes out. I probably would have passed this one by, except for the New York Times article using it to tout the effectiveness of acupuncture.

The headline reads: “Acupuncture, Real or Not, Eases Side Effects of Cancer Drugs.”

I know that authors, in this case Nicholas Bakalar, often do not write their own headlines, but here the article itself is just as bad. It begins:

Both acupuncture and sham acupuncture were effective in reducing menopausal symptoms in women being treated with aromatase inhibitors for breast cancer, a small randomized trial found.

This is, in fact, not true, but this fallacy has become the centerpiece deception of acupuncture promotion. See if you can spot the fallacy, but before I discuss it let me review the study itself.

The one thing Bakalar gets right is that it was a small study. The study had two arms, real acupuncture (RA – 23 subjects) and sham acupuncture (SA – 24 subjects). These were all women with breast cancer on an aromatase inhibitor (AI) who were getting musculoskeletal side effects from the medication. They were treated for 8 weeks with either RA or SA. Outcomes included several scales that essentially involve reporting subjective symptoms.

The study found that both groups reported improvement in symptoms with treatment, but there was no statistically significant difference between the two. In the real world we refer to this as a negative study.
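
To make that concrete, here is a minimal sketch of the comparison a controlled trial is designed to make: real versus sham. It is written in Python with simulated placeholder scores, not the study’s data, and uses a generic two-sample t-test rather than whatever model the authors actually fit.

```python
# Minimal sketch of the between-arm comparison in a two-arm trial.
# The scores are simulated placeholders (NOT the study's data), and the
# plain two-sample t-test is a generic stand-in, not necessarily the
# exact analysis the authors performed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical change-from-baseline symptom scores (more negative = more improvement),
# drawn from the same distribution for both arms, i.e. no true difference
real_arm = rng.normal(loc=-2.0, scale=3.0, size=23)   # RA, n = 23
sham_arm = rng.normal(loc=-2.0, scale=3.0, size=24)   # SA, n = 24

result = stats.ttest_ind(real_arm, sham_arm)
print(f"RA mean change: {real_arm.mean():.2f}, SA mean change: {sham_arm.mean():.2f}")
print(f"Between-arm p-value: {result.pvalue:.3f}")  # no true difference, so usually well above 0.05
```

Because both arms are drawn from the same distribution, the between-arm comparison comes out null. That is what a negative result looks like, no matter how much each arm improved relative to its own baseline.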

Trying to rescue something interesting from the results, the authors also report:

Post-hoc analysis indicated that African American patients (n = 9) benefited more from RA than SA compared with non-African American patients (n = 38) in reducing hot flash severity (P < .001) and frequency (P < .001) scores.

Those P-values are completely worthless, because you cannot interpret the P-values of a post-hoc analysis. We don’t know how many post-hoc analyses were done, and there does not seem to be any correction for multiple comparisons. There are many possible subgroups that could have been pulled out – by age, weight, and severity of symptoms, for example. There were also many possible comparisons, because multiple subjective outcomes were measured at multiple time points. The fact that one subgroup had an outlier result is meaningless. Also, this study is far too small for such subgroup analysis – only 9 subjects in the African American group.
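
To see how easily uncorrected post-hoc comparisons can generate impressive-looking P-values, consider a short simulation. The sketch below is Python; all of the data are random noise, and only the 9 versus 38 subgroup split loosely echoes the study. It runs a modest number of subgroup comparisons across several hypothetical outcomes and time points with no true effect anywhere, and spurious “significant” P-values become likely simply because many tests are run. That is why a correction such as Bonferroni, or pre-registration of the specific comparison, is needed before such P-values mean anything.

```python
# Simulation: uncorrected post-hoc subgroup comparisons produce spurious
# "significant" P-values. All data here are random noise with no true
# effect; the subgroup sizes (9 vs 38) loosely mirror the study, but
# nothing else about this sketch reflects the actual trial data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_small, n_large = 9, 38   # hypothetical subgroup sizes
n_outcomes = 5             # e.g. several symptom scales
n_timepoints = 4           # e.g. several follow-up visits
alpha = 0.05

p_values = []
for _ in range(n_outcomes * n_timepoints):
    # Pure noise: no true difference between the subgroups
    small_group = rng.normal(size=n_small)
    large_group = rng.normal(size=n_large)
    p_values.append(stats.ttest_ind(small_group, large_group).pvalue)

p_values = np.array(p_values)
n_tests = len(p_values)

print(f"{int((p_values < alpha).sum())} of {n_tests} null comparisons 'significant' at p < {alpha}")
# Rough family-wise error rate, assuming independent tests (real outcomes are correlated)
print(f"Chance of at least one false positive: {1 - (1 - alpha) ** n_tests:.0%}")
print(f"Bonferroni-corrected per-test threshold: {alpha / n_tests:.4f}")
```

Even with only 20 comparisons the chance of at least one spurious hit is roughly 60 percent, and the real space of possible subgroups (age, weight, baseline severity, and so on) and endpoints in a trial like this is far larger.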

I think it was deceptive to even report these results with those P-values. The authors conclude from this:

Racial differences in response to acupuncture warrant further study.

Even that tepid conclusion (probably all they could get away with in peer-review) is not justified by this data, in my opinion. It seems like they wanted to show some positive trend to make it seem like acupuncture has promise.

The only justified conclusion one can make from this study is that it failed to show any effect from acupuncture for treating AI side effects – this is a negative study. The authors conclude, however:

Both RA and SA were associated with improvement in PROs among patients with breast cancer who were receiving AIs, and no significant difference was detected between arms.

While technically accurate, this conclusion is misleading, which gets back to the core fallacy of recent acupuncture promotion. When the treatment and control groups, in a blinded comparison, show no difference, the only conclusion to draw is that the treatment had no measurable effect in that study (in plain terms, the treatment did not work).

Stating that both the treatment and the control showed improvement is a non sequitur in the context of a controlled clinical trial. The comparison of baseline symptoms to post-treatment symptoms (real or sham) is unblinded, and therefore it is not possible to make any efficacy claims from that change alone, especially with subjective outcomes. Comparison to baseline or historical controls is only justified with objective outcomes (like death). Even then the data are suspect, unless there are blinded controls.
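
A toy simulation shows one big reason both arms improve from baseline even when the treatment does nothing: regression to the mean. Patients tend to enroll when a fluctuating symptom is near its worst, so follow-up scores drift back toward their usual level regardless of what is done. The sketch below uses entirely hypothetical numbers and deliberately ignores expectation effects, natural healing, and everything else that also inflates unblinded before-and-after comparisons; it applies exactly zero treatment effect and still produces “improvement.”

```python
# Regression to the mean with zero treatment effect: patients are enrolled
# after a bad (high) symptom score, so their next measurement is lower on
# average even though nothing was done. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
n_patients = 1000

# Each patient has a stable underlying symptom level plus visit-to-visit noise
true_level = rng.normal(loc=5.0, scale=1.0, size=n_patients)

def visit_score():
    """One noisy measurement of every patient's symptom level."""
    return true_level + rng.normal(scale=2.0, size=n_patients)

screening = visit_score()
enrolled = screening > 6.0           # only symptomatic-enough patients enter the trial

baseline = screening[enrolled]
follow_up = visit_score()[enrolled]  # no treatment effect of any kind was applied

print(f"Enrolled: {int(enrolled.sum())} of {n_patients}")
print(f"Mean baseline score:  {baseline.mean():.2f}")
print(f"Mean follow-up score: {follow_up.mean():.2f}  (lower = 'improved')")
```

Because the drop has nothing to do with the intervention, it shows up equally in the real and sham arms. Only the blinded between-arm comparison can subtract it out, and in this trial that comparison was null.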

Interestingly, the authors provide a working definition of acupuncture in the introduction to the published study:

Acupuncture is a traditional Chinese medicine technique that involves inserting filiform stainless steel needles into specific points in the body to achieve therapeutic effect.

Acupuncture is sticking needles into specific points – those are the two variables that define acupuncture. If the specific points do not matter (sham acupuncture is sticking needles in the “wrong” points) then acupuncture, as defined, does not work.

The New York Times article reports:

The results may be attributable to a placebo effect, but the scientists suggest that the slight pricking of the skin could cause physiological changes. In any case, the lead author, Dr. Ting Bao, a medical oncologist at the University of Maryland, Baltimore, said there is no harm in trying acupuncture.

The results are attributable to placebo effects, by definition – there was no difference between the treatment and placebo groups. The authors (still trying to put a positive spin on their negative data) suggest that perhaps it is the sticking of needles that works, and not the specific points. However, their study did not control for the sticking of needles, so it is disingenuous to speculate about this variable.

We do have acupuncture studies, however, that control for the sticking of needles – so-called placebo acupuncture, where the skin is poked with toothpicks or a dull needle and there is no penetration. The most famous is this study of acupuncture for back pain, which used the toothpick control.

What the evidence shows is that poking the skin randomly with toothpicks is as effective as traditional acupuncture performed by a trained acupuncturist. But “as effective” could mean not effective at all, because all we have are unblinded comparisons to no treatment for subjective outcomes.

Dr. Bao also commits the “what’s the harm” fallacy. The study was negative. Stating that patients should try the treatment anyway is not justified. It is also untrue that there is no possible harm. There is harm in wasting time and resources on a treatment that demonstrably does not work. Promoting implausible and ineffective treatments also instills health misinformation in the public. Further, acupuncture is not without risks. Even if the risks are small, they are not justified when the evidence indicates no benefit.

The NYT article ends with this further quote from Dr. Bao:

“Acupuncture as a medical procedure has been practiced for thousands of years,” she said. “It has a minimal risk and potentially significant benefits.”

This is propaganda, not science. Acupuncture, as it is practiced today, is a fairly recent invention of the early 20th century. What was practiced for thousands of years bears more of a resemblance to bloodletting than modern acupuncture. In any case, this is nothing but the argument from antiquity logical fallacy. Culture and mechanisms of deception can propagate ineffective treatments for thousands of years.

Saying that a treatment has “potentially significant benefits” is unjustified opinion, and is especially odd coming from a scientist who just published a completely negative study showing the treatment is ineffective.

In fact there have been several thousand acupuncture studies over decades. After all of this clinical research, acupuncture has not been clearly demonstrated to be effective for any indication. In short, acupuncture does not work. It is too late to talk about acupuncture’s “potential,” as if we just need to study it more. It has been studied. It doesn’t work.

Proponents, however, will continue to publish poorly conducted studies in which bias and researcher degrees of freedom can generate positive results, along with more rigorous studies that yield negative results, which they will nonetheless promote as if they were positive.

For acupuncture true believers, acupuncture research is a “heads I win, tails I win” situation.


Posted by Steven Novella

Founder and currently Executive Editor of Science-Based Medicine, Steven Novella, MD is an academic clinical neurologist at the Yale University School of Medicine. He is also the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella also has produced two courses with The Great Courses, and published a book on critical thinking - also called The Skeptics’ Guide to the Universe.