
For the past 17 years Edge magazine has put an interesting question to a group of people they consider to be smart public intellectuals. This year’s question is: What Scientific Idea is Ready for Retirement? Several of the answers display, in my opinion, a hostility toward science itself. Two in particular aim their sights at science in medicine, the first by Dean Ornish, who takes issue with large randomized controlled clinical trials, and the second by Gary Klein, who has a beef with evidence-based medicine.

These responses do not come out of nowhere. The “alternative medicine” meme that has taken hold in the last few decades (a triumph of slick marketing over reason) is all about creating a double standard. There is regular medicine which needs to justify itself with rigorous science, and then there is alternative medicine, where the rules of evidence bend to the needs of the guru or snake oil salesperson.

We have been hearing arguments from alternative medicine proponents for years about why the strict rules of science need to be relaxed or expanded. Andrew Weil has advocated for the use of “uncontrolled clinical observations” (also known as anecdotes). David Katz advocates for a “more fluid concept of evidence.” Dr. Oz went so far as to advocate outright medical relativism, saying: “You find the arguments that support your data, and it’s my fact versus your fact.”

Dean Ornish

I can now add Dean Ornish to the list of gurus who want to change the rules of evidence to suit their needs. He is the founder and president of the Preventive Medicine Research Institute, and a long-time advocate of alternative medicine.

His answer to the question of what concept needs to be ditched from science is the large randomized controlled trial. He writes:

It is a commonly held but erroneous belief that a larger study is always more rigorous or definitive than a smaller one, and a randomized controlled trial is always the gold standard. However, there is a growing awareness that size does not always matter and a randomized controlled trial may introduce its own biases. We need more creative experimental designs.

He then launches into a discussion of the potential weaknesses and biases inherent in doing large trials. His specific observations are mostly valid; it is the conclusions he draws that are suspect.

Ornish is partly committing the Nirvana fallacy, or making the perfect the enemy of the good. No scientific study is perfect. Large clinical trials are very difficult to run, and compromises are often made for practical reasons. There are limited resources, and when studying people you have to consider the inconvenience to the study subjects.

The goal is to design the best study possible, not necessarily the perfect study. Further, multiple studies are often necessary, making different compromises, so that the strengths and weaknesses of the various trials will complement each other. In the end we grind our way slowly to a reliable answer.

It is easy, however, to point at the limitations and conclude that science is hopeless (the nihilistic approach) or that anything goes (the alternative medicine approach).

Ornish is claiming that smaller studies can be more rigorous because you can dedicate more time and resources to each subject. Really large trials compromise by spending fewer resources on each subject.

But Ornish glosses over the fact that with smaller studies you sacrifice statistical power. It is odd to argue that smaller studies are better, or that we should abandon the large clinical trial. A more reasonable conclusion is that, with each trial, we make the compromises that make the most sense.

In general, studies are powered just enough to be able to demonstrate the anticipated or likely effect size. Overpowering a study is a bad idea, but so is underpowering one.
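To make the trade-off between sample size and detectable effect concrete, here is a minimal sketch (mine, not from the article) using the standard approximation for a two-arm comparison with a standardized effect size (Cohen's d); the helper name n_per_group is purely illustrative:

```python
# Approximate sample size per arm for a two-sided, two-sample comparison:
#   n_per_group ~ 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2
# where d is the standardized effect size (Cohen's d).
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Roughly how many participants per arm are needed to detect `effect_size`."""
    z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)           # ~0.84 for 80% power
    return round(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))   # medium effect: ~63 per group
print(n_per_group(0.25))  # small effect:  ~251 per group
```

Halving the effect size you expect to see roughly quadruples the required sample size, which is why a small, intensive study cannot reliably detect the modest effects typical of many real interventions.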

Further, small, detailed studies can complement larger, simpler trials. So Ornish is also committing a false choice fallacy.

Ornish’s odd recommendations perhaps make sense in light of the example he presents:

That’s just what happened in the Women’s Health Initiative study, which followed nearly 49,000 middle-aged women for more than eight years.

However, the experimental group participants did not reduce their dietary fat as recommended—over 29 percent of their diet was comprised of fat, not the study’s goal of less than 20 percent.

It seems he is whining a bit about a large clinical trial that did not show the results he liked. It’s OK to point out the weaknesses in the trial (although I think Ornish protests too much). It’s a radical overreaction to recommend ditching large clinical trials.

Gary Klein

It’s rare to see a logical fallacy stated so overtly. Klein could not have crafted a better example of the Nirvana fallacy if he tried:

But we should only trust EBM if the science behind best practices is infallible and comprehensive, and that’s certainly not the case. Medical science is not infallible. Practitioners shouldn’t believe a published study just because it meets the criteria of randomized controlled trial design. Too many of these studies cannot be replicated. Sometimes the researcher got lucky and the experiments that failed to replicate the finding never got published or even submitted to a journal (the so-called publication bias). In rare cases the researcher has faked the results. Even when the results can be replicated they shouldn’t automatically be believed—conditions may have been set up in a way that misses the phenomenon of interest so a negative finding doesn’t necessarily rule out an effect.

Really – unless science is infallible and comprehensive, we should ditch it? Unless we have perfect knowledge of everything, we should behave as if we know nothing?

This attitude is not new. It is common in the alternative world. It just usually isn’t stated so boldly.

Again, Klein points out legitimate problems with the institution of science in general, and evidence-based medicine in particular. Yes – there are biases, there are publication issues, and there are failures to replicate. We spend a great deal of time on SBM pointing out and discussing all the various challenges to rigorous science.

Klein and others, however, want to throw the baby out with the bathwater – to ditch scientific evidence, rather than work toward improving it. All of the problems with science in medicine have potential solutions, and we are making progress.

Klein also falls for a very common myth about EBM:

EBM formulates best practices for general populations but practitioners treat individuals, and need to take individual differences into account.

Here he clearly demonstrates that he is not familiar with EBM (and therefore is not in a position to recommend its demise). EBM absolutely recognizes that evidence needs to be applied to individual patients, and that practitioners need to combine the best evidence with their own clinical experience and judgments. This is nothing but a false accusation based upon ignorance of EBM.

Klein goes further, saying that advances in surgical techniques do not need placebo-controlled trials, and therefore we don’t really need placebo-controlled studies.

He then concludes:

Worse, reliance on EBM can impede scientific progress. If hospitals and insurance companies mandate EBM, backed up by the threat of lawsuits if adverse outcomes are accompanied by any departure from best practices, physicians will become reluctant to try alternative treatment strategies that have not yet been evaluated using randomized controlled trials. Scientific advancement can become stifled if front-line physicians, who blend medical expertise with respect for research, are prevented from exploration and are discouraged from making discoveries.

This is a profoundly naïve position. Preventing practitioners from essentially experimenting in an uncontrolled way on their patients is a good thing. Best practices and the standard of care exist for a reason – they are based not only upon the best evidence but also upon expert analysis and experience.

Adherence to best practices strongly correlates with better outcomes. Over-reliance on experience and judgment in deciding on treatments can be counterproductive.

Further, experimenting needs to be done within a strict ethical and scientific framework. You cannot, for example, ditch the standard of care in order to go exploring.

Conclusion

Doing clinical science is complex and difficult. The institutions of science are also imperfect and need continuous revision.

There is a disturbing tendency to point out the challenges of rigorous science as a means of arguing for the abandonment or loosening of the rules of science and evidence.

This very approach is a logical fallacy, however. The limitations of science do not imply that anything else works or is better. Further, science may have challenges, but it is still the best game in town. Its problems are not insurmountable – they can be improved or fixed.

For example, journals can give more space to publishing replications, clinical trials can be registered prospectively so that negative studies cannot be hidden, and we can lower the “publish or perish” pressure in academia to encourage fewer, more rigorous trials and reduce the flood of unreliable preliminary studies.

There are plenty of thoughtful solutions. Abandoning rigorous evidence is not one of them.



Posted by Steven Novella

Founder and currently Executive Editor of Science-Based Medicine, Steven Novella, MD, is an academic clinical neurologist at the Yale University School of Medicine. He is also the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of NeuroLogica Blog, a daily blog that covers news and issues in neuroscience, as well as general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella has also produced two courses with The Great Courses, and published a book on critical thinking, also called The Skeptics’ Guide to the Universe.