
This is an article I could write on almost any given week – a study reports on the association between meditation and lower cardiovascular risk, which the press glowingly reports as another study showing the benefits of meditation. But on close examination, the study does not show that at all. The disconnect cuts to the heart of what it means to be “science-based”.

We often evaluate individual studies for their scientific qualities – were they well controlled, was the outcome measure objective, was the statistical evaluation valid, and was the outcome clinically meaningful? We can look for signs of p-hacking, or taking liberties with the methods in order to manufacture positive results.

But sometimes the problems with medical research go deeper than flaws in individual studies. Sometimes the entire research paradigm is basically broken. This does not mean every study is bad, but it does mean that most of the research suffers from scientific problems that are baked into the culture of research in a specific area. For example, homeopathy research suffers from rolling the dice with placebo-vs-placebo studies and then reporting chance positive studies, without ever achieving the kind of reproducibility that would indicate a real effect.
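To illustrate the dice-rolling point (with made-up numbers, not data from any actual trial), here is a minimal Python sketch of how a completely inert intervention still generates a steady trickle of “positive” trials at the conventional p < 0.05 threshold:

```python
# A minimal sketch of "rolling the dice": if a treatment is inert, running
# many small placebo-vs-placebo trials still yields some "positive" results
# at p < 0.05 by chance alone. (Illustrative numbers, not real trial data.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials, n_per_arm = 200, 50

false_positives = 0
for _ in range(n_trials):
    treated = rng.normal(0.0, 1.0, n_per_arm)   # both arms drawn from the
    control = rng.normal(0.0, 1.0, n_per_arm)   # same null distribution
    _, p = stats.ttest_ind(treated, control)
    false_positives += p < 0.05

print(f"{false_positives} of {n_trials} null trials came out 'positive'")  # ~5% expected
```

Report only the “positive” trials from a run like this and you can make an inert intervention look promising indefinitely.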

Acupuncture research is probably the worst – it suffers from a fundamental problem with the definition of acupuncture. The definition should be simple – sticking needles into specific acupuncture points to treat specific problems. However, the research consistently shows that it does not matter where you stick the needles, or whether you stick the needles at all. Further, the choice of acupuncture points is arbitrary and lacks consistency (which is unsurprising given the infinite variety of styles of acupuncture). Even more damning, the evidence clearly shows that acupuncture points don’t even exist. And yet we have hundreds of studies reporting that “acupuncture works”, even when the studies themselves are clearly negative.

The underlying problem with homeopathy and acupuncture research is that most of the researchers are, to put it simply, doing it wrong. They are committing the cardinal sin of pseudoscience – trying to prove their hypothesis correct, rather than trying to prove it incorrect. And so the entire research program is broken. If this problem were fixed, both homeopathy and acupuncture would be declared ineffective and scientific dead ends and would go away.

There is a similar situation with meditation, except that it might be more accurate to say that the meditation research tells us practically nothing, rather than indicating that meditation does not work. As I discussed in 2017, meditation comes in many flavors, but they lack clear operational definitions. This is the first step in any valid science – a clear definition. If you can’t do that, then it is impossible to have meaningful and interpretable research outcomes. As the review I discussed at the time concluded:

However, the aforementioned complexity, confounding, and confusion that surrounds empirical research on “mindfulness” limits the potential of the method to inform broad questions and inform specific theories. The extent to which a specific model is supported or disconfirmed by particular sets of empirical data or systematic observations depends on the meaning of “mindfulness” that inspired data acquisition.

This is a fancy way of saying that we can’t really conclude anything from existing research because the very definition of “mindfulness meditation” is too squirrelly. What about the best studies of meditation – can they tell us anything? The largest systematic review, which James Coyne discussed here in 2019, identified 18,753 citations in its preliminary search, but only 47 trials of sufficient quality to include in the analysis. That is a lot of noise for very little signal. They found:

Low evidence of no effect or insufficient evidence of any effect of meditation programs on positive mood, attention, substance use, eating habits, sleep, and weight. No evidence that meditation programs were better than any active treatment (ie, drugs, exercise, and other behavioral therapies).

This conclusion is eerily similar to those from other research programs, like homeopathy and acupuncture, in which the best studies show the intervention probably does not work, while the vast majority of studies claim it probably does work but are of insufficient quality to really support that conclusion. So the press and the public are left with the opposite of the scientific reality.

The current study is a great example. The reporting declares, “Meditation linked to lower cardiovascular risk”. That is probably as far as most people will get, or they will read the press release – the first two-thirds of which is also glowing but misleading. If you look at the actual study, however, you see that the data is not only not convincing, it’s actually negative.

First, this is a correlational study only. There is no intervention or control group; the authors are just looking at survey information about self-reported habits like exercise, smoking, and meditation. You should always suspect this when the headline uses the word “linked”. The fact that A is linked to B does not mean A causes B, even though that is often the implication.

But the problems are deeper than that. This is a big survey, with 61,267 NHIS participants. That is both a strength and a weakness. It gives the study a lot of statistical power to find small effects, but it also means that any confounding factors will also likely be highly statistically significant. It’s like looking at an elephant under a microscope – you will see a lot of detail, but may miss the big picture.
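As a rough illustration of the power issue (purely simulated numbers, not the study’s data), with tens of thousands of participants even a trivially small difference between groups comes out as highly statistically significant:

```python
# A minimal sketch (not the study's actual data) of why huge samples make
# even trivial differences "highly statistically significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30_000                 # roughly half of ~61,000 participants per group
tiny_effect = 0.05         # one twentieth of a standard deviation -- clinically negligible

group_a = rng.normal(0.0, 1.0, n)
group_b = rng.normal(tiny_effect, 1.0, n)

t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.2g}")   # p lands far below 0.05 despite a negligible effect
```

Statistical significance in a sample this size tells you very little about whether a difference matters, or what is causing it.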

In this study the authors controlled for demographic factors, like sex, age, and weight, but did not control for cardiovascular risk factors. This is because they were trying to say that meditation is linked to lower risk factors, implying that it causes lower risk factors. Of course, in the discussion they have to admit that their study does not show this because it is correlational only, but it allows them to say that meditation “may” cause lower risk factors, and that it supports other research that also shows meditation “may” have benefits. This is how a research house of cards gets built on itself.

But a close look at the tables shows how thin this argument is. Most of the differences between meditators and non-meditators were highly statistically significant. Those who meditate, for example, were more likely to be married and less likely to be divorced. Do you think that might be a confounding factor in heart disease? The authors give the game away in their discussion:

On multivariate analysis, after adjusting alcohol consumption, physical activity, exercise, meditation was only significantly associated with a lower prevalence of hypercholesterolemia. Our results may be confounded by alcohol consumption, physical activity, and exercise but we cannot be totally ruled out [sic] that some of these variables may be in the causal pathway making the associations non-significant when adjusting for alcohol consumption, physical activity, exercise in an initial model.

This is what I meant when I said the study was negative. When you control for the obvious confounding factors (while acknowledging there are probably others not captured in the data) all of the significant effects on stroke, heart attack, and other meaningful clinical outcomes go away. There was no independent association between meditation and good clinical outcomes. None. Normally this would be considered a negative study.
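Here is a minimal simulation of that confounding pattern (the proportions are assumed for illustration and do not come from the NHIS data): meditation is given no direct effect on the outcome, but meditators are made more likely to exercise. The crude association looks protective, and it collapses once exercise is adjusted for – the same pattern the authors describe:

```python
# A minimal simulation (assumed numbers, not the NHIS data) of how an
# association can vanish once a confounder is adjusted for.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 61_267

# For illustration, meditators are assumed to exercise more often,
# but meditation itself is given no direct effect on the outcome.
meditates = rng.binomial(1, 0.1, n)
exercises = rng.binomial(1, np.where(meditates == 1, 0.7, 0.4))
p_event = np.where(exercises == 1, 0.08, 0.15)   # exercise lowers event risk
event = rng.binomial(1, p_event)

# Crude model: meditation alone looks "protective" (odds ratio ~0.8 here).
crude = sm.Logit(event, sm.add_constant(meditates)).fit(disp=0)
# Adjusted model: the meditation odds ratio collapses toward 1.0.
adjusted = sm.Logit(event, sm.add_constant(np.column_stack([meditates, exercises]))).fit(disp=0)

print("crude OR for meditation:   ", np.exp(crude.params[1]).round(2))
print("adjusted OR for meditation:", np.exp(adjusted.params[1]).round(2))
```

In this toy setup the “benefit” of meditation is entirely an artifact of who meditates, which is exactly why adjusted results, not crude associations, are the ones that matter.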

But in the broken paradigm of meditation research, negative outcomes are magically positive. The authors rescue their hypothesis (remember, they are trying to prove meditation works) by arguing that we cannot rule out that meditation causes the confounding factors. In other words, meditation may cause people to drink less and exercise more. OK, but then really it is the known healthy lifestyle factors that are causing the improved cardiovascular outcomes (including lower cholesterol), not the meditation itself. The authors have to retreat to this weaker alternate hypothesis that meditation causes better behavior.

But that is not what this study looked at, so it is just a gratuitous hypothesis. We would need to set up a study in which meditation was the intervention with an adequate control group, and exercise and alcohol consumption were the outcomes. Of course this would need to be compared to just counseling people on better lifestyle choices. So now meditation has become a counseling intervention, not a direct health intervention. None of this apparently matters, however, because the goal is to conclude that “meditation works” regardless of the twisting logical path you need to take to get there.

All of this shows how important interpretation of the data is. Data does not exist in a vacuum. If you look at this data through the lens of science-based medicine, it is clearly negative. Any positive association between meditation and clinical outcomes goes away when you include confounding factors. That should have been the end of the discussion.


Posted by Steven Novella

Founder and currently Executive Editor of Science-Based Medicine, Steven Novella, MD is an academic clinical neurologist at the Yale University School of Medicine. He is also the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella also has produced two courses with The Great Courses, and published a book on critical thinking – also called The Skeptics’ Guide to the Universe.