Attacks on science-based medicine (SBM) come in many forms. There are the loony forms that we see daily from the anti-vaccine movement, quackery promoters like Mike Adams and Joe Mercola, those who engage in “quackademic medicine,” and postmodernists who view science as “just another narrative,” as valid as any other, or who even view science- and evidence-based medicine as “microfascism.” Sometimes these complaints come from self-proclaimed champions of evidence-based medicine (EBM) who, their self-characterization notwithstanding, show signs of having a bit of a soft spot for the ol’ woo. Then sometimes there are thoughtful, serious criticisms of some of the assumptions that underlie SBM.
The criticism I am about to address tries to be one of these but ultimately fails because it attacks a straw man version of SBM.
True, the criticism of SBM I’m about to address does come from someone named Steve Simon, who vocally supports EBM but doesn’t like the criticism of EBM implicit in the very creation of the concept of SBM. Simon has even written a very good deconstruction of postmodern attacks on EBM himself, as well as quite a few other good discussions of medicine and statistics. Unfortunately, in his criticism, Simon appears to have completely missed the point about the difference between SBM and EBM. As a result, his criticisms of SBM wind up being mostly the application of a flamethrower to a Burning Man-sized straw man representing what he thinks SBM to be. It makes for a fun fireworks show but is ultimately misdirected, a lot of heat but little light. For a bit of background, Simon’s post first piqued my curiosity because of its title, Is there something better than Evidence Based Medicine out there? The other reason that it caught my attention was the extreme naiveté revealed in the arguments used. In fact, Simon’s naiveté reminds me very much of my very own naiveté about three years ago.
Here’s the point where I tell you a secret about the very creation of this blog. Shortly after Steve Novella invited me to join, the founding members of SBM engaged in several frank and free-wheeling e-mail exchanges about what the blog should be like, what topics we wanted to cover, and what our philosophy should be. One of these exchanges was about the very nature of SBM and how it is distinguished from EBM, the latter of which I viewed as the best way to practice medicine. During that exchange, I made arguments that, in retrospect, were eerily similar to the ones by Simon that I’m about to address right now. Oh, how epic these arguments were! In retrospect, I can but shake my head at my own extreme naiveté, which I now see mirrored in Simon’s criticism of SBM. Yes, I was converted, so to speak (if you’ll forgive the religious terminology), which is why I see in Simon’s article a lot of my former self, at least in terms of how I used to view evidence in medicine.
The main gist of Simon’s complaint comes right at the beginning of his article:
Someone asked me about a claim made on an interesting blog, Science Based Medicine. The blog claims that Science Based Medicine (SBM) tries to draw a distinction between that practice and Evidence Based Medicine (EBM). SBM is better because “EBM, in a nutshell, ignores prior probability (unless there is no other available evidence) and falls for the p-value fallacy; SBM does not.” Here’s what I wrote.
No. The gist of the science based medicine blog appears to be that we should not encourage research into medical therapies that have no plausible scientific mechanism. That’s quite a different message, in my opinion, than the message promoted by the p-value fallacy article by Goodman.
First off, Simon’s complaint makes me wonder if he actually read Dr. Atwood’s entire post. To show you what I mean, I present here the whole quote from Dr. Atwood in context:
EBM, in a nutshell, ignores prior probability† (unless there is no other available evidence) and falls for the “p-value fallacy”; SBM does not. Please don’t bicker about this if you haven’t read the links above and some of their own references, particularly the EBM Levels of Evidence scheme and two articles by Steven Goodman (here and here). Also, note that it is not necessary to agree with Ioannidis that “most published research findings are false” to agree with his assertion, quoted above, about what determines the probability that a research finding is true.
Simon, unfortunately, decides to bicker. In doing so, he builds a massive straw man. I’m going to jump ahead to the passage that most reveals Simon’s extreme naiveté:
No thoughtful practitioner of EBM, to my knowledge, has suggested that EBM ignore scientific mechanisms.
Talk about a “no true Scotsman” fallacy!
You know, about three years ago I can recall writing almost exactly the same thing in the aforementioned epic e-mail exchange arguing the very nature of EBM versus SBM. The problem, of course, is not that EBM completely ignores scientific mechanisms. That’s every bit as much of a straw man characterization of SBM as the characterization that Simon skewered of EBM being only about randomized clinical trials (RCTs). The problem with EBM is, rather, that it ranks basic science principles on either the very lowest or the second-lowest rung of the various hierarchies of evidence that EBM promulgates as the way to evaluate the reliability of scientific evidence to be used in deciding which therapies work. The best known of these is the one published by the Centre for Evidence-Based Medicine, but there are others. Eddie Lang, for instance, places basic research second from the bottom, just above anecdotal clinical experience of the sort favored by Dr. Jay Gordon (see Figure 2). Duke University doesn’t even really mention basic science; rather, it appears to lump it in at the very bottom of the evidence pyramid under “background information.” When I first started to appreciate the difference between EBM and SBM, I basically had to be dragged, kicking and screaming, by Steve and Kimball, to look at these charts and realize that, yes, in the formal hierarchies of evidence used by the major centers for EBM, basic science and plausible scientific mechanisms do rank at or near the bottom. I didn’t want to accept that it was true. I really didn’t. I didn’t want to believe that SBM is not synonymous with EBM, which would be as it should be in an ideal world. Simon apparently doesn’t either:
Everybody seems to criticize EBM for an exclusive reliance on randomized clinical trials (RCTs). The blog uses the term “methodolatry” in this context. A group of nurses who advocate a post-modern philosophical approach to medical care also criticized EBM and used an even stronger term, micro-fascism, to describe the tendency of EBM to rely exclusively on RCTs.
But I have not seen any serious evidence of EBM relying exclusively on RCTs. That’s certainly not what David Sackett was proposing in the 1996 BMJ editorial “Evidence based medicine: what it is and what it isn’t”. Trish Greenhalgh elaborates quite clearly in her book “How to Read a Paper: The Basics of Evidence Based Medicine” that EBM is much more than relying on the best clinical trial. There is, perhaps, too great a tendency for EBM proponents to rely on checklists, but that is an understandable and forgivable excess.
I must admit to considerable puzzlement here. EBM lists randomized clinical trials (RCTs) and meta-analyses or systematic reviews of RCTs as being the highest form of evidence, yet Simon says he sees no serious evidence of EBM relying exclusively on RCTs. I suppose that’s true in a trivial sort of way, given that there are conditions and questions for which there are few or no good RCTs. When that is the case, one has no option but to rely on “lower” forms of evidence. However, the impetus behind EBM is to use RCTs wherever possible in order to decide which therapies are best. If that weren’t true, why elevate RCTs to the very top of the evidence hierarchy? Simon is basically misstating the complaint anyway. We do not criticize EBM for an “exclusive” reliance on RCTs but rather for an overreliance on RCTs devoid of scientific context.
Simon then decides to try to turn the charge of “methodolatry,” or, as revere once famously called it, the profane worship of the randomized clinical trial as the only valid method of investigation, against us. This misinterpretation of what SBM is leads Simon, after having accused SBM of leveling straw man attacks against EBM, to building up that aforementioned Burning Man-sized straw man himself, which he then begins to light on fire with gusto:
I would argue further that it is a form of methodolatry to insist on a plausible scientific mechanism as a pre-requisite for ANY research for a medical intervention. It should be a strong consideration, but we need to remember that many medical discoveries preceded the identification of a plausible scientific mechanism.
While this is mostly true, one might point out that, once the mechanisms behind such discoveries were identified, all of them had a degree of plausibility in that they did not require the overthrow of huge swaths of well-settled science in order to be accepted as valid. Let’s take the example of homeopathy. I use homeopathy a lot because it is, quite literally, water and because its proposed mechanism of action goes against huge swaths of science that have been well-characterized for centuries. I’m not just talking one scientific discipline, either. For homeopathy to be true, much of what we currently understand about physics, chemistry, and biology would have to be, as I am wont to say, not just wrong, but spectacularly wrong. That is more than just lacking prior plausibility. It’s about as close to being impossible as one can imagine in science. Now, I suppose there is a possibility that scientists could be spectacularly wrong about so much settled science at once. If they are, however, it would take compelling evidence on the order of the mass of evidence that supports the impossibility of homeopathy to make that possibility worth taking seriously. Extraordinary claims require extraordinary evidence. RCTs showing barely statistically significant effects do not constitute extraordinary evidence, given that chance alone guarantees that some RCTs will be positive even in the absence of a real effect, and given the biases and deficiencies present even in RCTs. Kimball explains this concept quite well:
When this sort of evidence [the abundant basic science evidence demonstrating homeopathy to be incredibly implausible] is weighed against the equivocal clinical trial literature, it is abundantly clear that homeopathic “remedies” have no specific, biological effects. Yet EBM relegates such evidence to “Level 5”: the lowest in the scheme. How persuasive is the evidence that EBM dismisses? The “infinitesimals” claim alone is the equivalent of a proposal for a perpetual motion machine. The same medical academics who call for more studies of homeopathy would be embarrassed, one hopes, to be found insisting upon “studies” of perpetual motion machines. Basic chemistry is still a prerequisite for medical school, as far as I’m aware.
Yes, Simon is indeed tearing down a straw man. As Kimball himself would no doubt agree, even the most hardcore SBM aficionado does not insist on a plausible scientific mechanism as a “pre-requisite” for “ANY” research, as Simon claims. Rather, what we insist on is that the range of potential mechanisms proposed not require breaking the laws of physics, or, if it does, that there be highly compelling evidence that the therapy under study actually has some sort of effect sufficient to make us doubt our understanding of the biology involved.
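The prior-probability argument can be made concrete with a quick numerical sketch. This is my own illustration, not taken from Atwood’s or Goodman’s papers; the prior of 0.001, power of 0.8, and alpha of 0.05 are hypothetical numbers chosen to mimic a homeopathy-like claim. It shows that chance alone makes about 5% of trials of an inert therapy come out “significant,” that a single positive trial barely moves a very low prior, and roughly how many positive trials it would take to make such a claim more likely true than not:

```python
import math
import random

random.seed(42)

# Part 1: simulate RCTs of an inert "remedy" (true effect = 0) and count how
# often a two-sample z-test comes out "significant" at p < 0.05.
n_trials, n = 10_000, 50
false_positives = 0
for _ in range(n_trials):
    treated = [random.gauss(0, 1) for _ in range(n)]   # inert: same distribution
    control = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(treated) / n - sum(control) / n) / math.sqrt(2 / n)
    if abs(z) > 1.96:                                  # two-sided p < 0.05
        false_positives += 1
rate = false_positives / n_trials
print(f"false-positive rate: {rate:.3f}")              # close to 0.05 by construction

# Part 2: Bayes' rule. Given one positive trial, how likely is a real effect?
prior = 0.001                    # hypothetical prior plausibility of the claim
power, alpha = 0.80, 0.05
ppv = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"P(real effect | one positive trial) = {ppv:.3f}")   # roughly 0.016

# Part 3: how many independent positive trials (each with likelihood ratio
# power/alpha, a simplification) until the posterior probability exceeds 50%?
odds = prior / (1 - prior)
needed = 0
while odds / (1 + odds) < 0.5:
    odds *= power / alpha
    needed += 1
print(f"positive trials needed to pass 50%: {needed}")      # 3
```

Even this toy calculation is generous to the implausible claim: it assumes the trials are independent, unbiased, and free of publication bias, so the real-world evidential bar is higher still.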
Simon then appeals to there being some sort of “societal value” in testing interventions that are widely used in society even when those interventions have no plausible mechanism. I might agree with him, except for two considerations. First, no amount of studies will convince, for example, homeopaths that homeopathy doesn’t work. Witness Dana Ullman if you don’t believe me. Second, research funds are scarce and likely to become even more so over the next few years. From a societal perspective, it’s very hard to justify allocating scarce research dollars to the study of incredibly implausible therapies like homeopathy, reiki, or therapeutic touch. (After all, reiki is nothing more than faith healing based on Eastern mystic religious beliefs rather than Christianity.) Given that, for the foreseeable future, research funding will be a zero sum game, it would be incredibly irresponsible to allocate funds to studies of magic and fairy dust like homeopathy, knowing that those are funds that won’t be going to treatment modalities that might actually work.
When it all comes down to it, I think that Simon is, as I was, in denial. When confronted with the whole concept of SBM compared to EBM, I denied what I didn’t want to believe. To me, it seemed so utterly obvious that the scientific plausibility of the hypothesis under study has to be taken into account in evaluating the evidence. I just couldn’t imagine that any system of evaluating evidence could be otherwise; it made no sense to me. So I imposed this common-sense view on EBM, and I rather suspect that many other advocates of EBM like Simon labor under the same delusion I did. The problem is, though, that critics of EBM are basically correct on this score. Still, realizing it or admitting it did not come easy. For me to accept that EBM had a blind spot when it came to basic science, it took having my face rubbed in unethical and scientifically dubious trials like that of the Gonzalez therapy for pancreatic cancer or chelation therapy for cardiovascular disease. Let’s put it this way. That EBM is willing to waste money studying something that is nothing but water, and whose “scientific basis” is the equivalent of claiming that a perpetual motion machine can be constructed, tells me that under EBM basic science counts for close to nothing. Ditto wasting money on studying a therapy whose major component is coffee enemas used to treat a deadly cancer. Simon cheekily suggests at the end of his post that “maybe we should distinguish between EBM and PIEBM (poorly Implemented Evidence Based Medicine).” The problem is, trials of therapies like the Gonzalez regimen, homeopathy, and reiki are a feature of, not a bug in, EBM. In fact, I challenge Simon to provide a rationale, under EBM as it is currently constituted, to justify not having to do a clinical trial of these therapies. There is none.
I realize that others have said it before here (and probably said it better than I), but we at SBM are not hostile to EBM at all. Rather, we view EBM as incomplete, a subset of SBM. It’s also too easily corrupted to provide an air of scientific legitimacy to fairy dust like homeopathy and reiki. These problems, we argue, can be ameliorated by expanding EBM into SBM. Personally, I suspect that the originators of EBM, like me (and, I suspect, Simon), never thought of the possibility of EBM being applied to hypotheses as awe-inspiringly implausible as those of CAM. It simply never occurred to them; they probably assumed that any hypothesis that reaches the clinical trial stage must have good preclinical (i.e., basic science) evidence to support its efficacy. But we know now that this isn’t the case. I can’t speak for everyone else here, but after agreeing with Kimball that EBM ought to be synonymous with SBM, I also express the hope that one day there will be no distinction between SBM and EBM. Unfortunately, we aren’t there yet.
NOTE: There will be one more post later today; so don’t go away just yet.