
A few months ago, Steve Novella and I published an article in Trends in Molecular Medicine entitled “Clinical trials of integrative medicine: testing whether magic works?” It was our first foray together into publishing commentary about science-based medicine versus evidence-based medicine, using a topic that we’ve both written extensively about over the years on this blog and our respective personal blogs. Specifically, we discussed whether it is worthwhile to do randomized clinical trials (RCTs) testing highly improbable treatments, such as reiki and homeopathy, for neither of which is there any physical basis to believe they do anything whatsoever. As I’ve said many times before, reiki is simply faith healing in which Eastern mysticism is substituted for Christian beliefs, and homeopathy, as we’ve discussed many times here on SBM, is vitalistic sympathetic magic with no evidence to support its two laws.

To our surprise, that article generated a fair amount of press (for example this), with accounts of it showing up in the media in various places and Steve and I being asked to do a fair number of interviews. Part of the reason, I suspect, is that the editor made the article available for free for a month after its initial publication. (Unfortunately it’s back behind the paywall again.) Part of the reason is that, intuitively, it makes sense to people not to waste money testing what is, at its core, magic. When I followed up that publication with an article criticizing “integrative oncology” in Nature Reviews Cancer entitled “Integrative oncology: Really the best of both worlds?”, the target was well and truly on my back. Indeed, let’s just say that the Society for Integrative Oncology and the Consortium of Academic Health Centers for Integrative Medicine (CAHCIM) are quite unhappy with me. When both their letters to the editor are published (right now, only one is), I might even blog about them.

In the meantime, I want to deal with criticism published in an unexpected place, albeit not by unexpected critics. The reason is that this criticism relies on a common straw man caricature of what we are saying when we advocate science-based medicine (SBM). SBM considers prior plausibility in determining which modalities to test in clinical trials, and it embraces Bayesian thinking, in which prior plausibility affects the posterior probability that a “significant” result is not a false positive. That stands in contrast to the current evidence-based medicine (EBM) paradigm, which relegates basic science knowledge to the lowest rung on the EBM pyramid, even well-established principles of science showing that something like, say, homeopathy or reiki is impossible under the current understanding of physics, chemistry, and biology. It’s also a criticism that comes up frequently enough that, even though it’s been addressed before in various ways by various SBM bloggers, it’s worth revisiting from time to time. In this case, that’s particularly so because one of the two critics taking Steve and me to task is currently embroiled in a controversy about testing homeopathy for attention deficit hyperactivity disorder (ADHD) at the University of Toronto (more details on that later). Let’s just say that the criticism of Steve and me gives me an “in” to address a story I thought had passed me by, and I intend to take it.

Steve and I are criticized as unscientific

I mentioned that I was surprised about where this criticism of Steve and me was published; the reason for my surprise is that it was published in Focus on Alternative and Complementary Therapies (FACT), a journal edited by Edzard Ernst. This is usually a journal that is—shall we say?—not particularly hospitable to arguments like this. Not this time. Witness the article by Sunita Vohra and Heather Boon entitled “Advancing knowledge requires both clinical and basic research“. The criticism begins thusly:

Recently, Gorki [sic] and Novella [1] recommended an approach that would severely curtail one’s ability to explore novel therapies and gain new understanding about human biology. They propose that ‘science-based medicine’ replace the modern standard of ‘evidence-based medicine’ and be used to limit clinical investigation to only those therapies that have positive basic science studies.[1] This approach is a cause for concern as it is predicated on an assumption that we already understand how the world and, in particular, the human body, works. This seems to fly in the face of the basic scientific method, which starts with an intriguing observation and encourages one to ask ‘why did that happen?’ The point of science is to explore things that challenge our understanding and current paradigms of how things work. Theories need to be revised in light of available data, rather than be used to curtail the kinds of questions one can ask.

I had wondered why this article had not popped up in my Google Alerts feed before, given that I maintain an alert on my name precisely in order to monitor what people are saying about me and where I’m being mentioned, and that the article was first published online a month and a half ago. You’ve probably heard the famous adage, “I don’t care what the newspapers say about me as long as they spell my name right.” Well, Boon and Vohra didn’t even do me the courtesy of spelling my name right, which, fallible human that I am, I must admit annoys me. What annoys me a lot more, however, is the misstatement of our position. It is, I note, a very common straw man of our position, which Vohra and Boon double down on immediately after that passage:

The arguments put forth by Gorki [sic] and Novella[1] are based on the premise that our current understanding of basic human biology is sufficient to predict which therapies will work in humans. Interestingly, the hypothetical scenario used by these authors to help the reader understand the magnitude of the problem is the same strategy used by those trying to explain how little basic science research informs clinical therapies.[2] Unfortunately, it has become well known that basic science may or may not inform how therapies work in patients, more often ‘may not’.[3, 4] This is the reason given by pharmaceutical companies to justify why prescription medicines are so expensive.[5, 6] Since so many biologically plausible compounds ultimately fail to turn into viable therapies, the very few drugs that make it to market need to be priced in such a way as to compensate for all those that failed despite promising (and biologically plausible) laboratory findings.

Have Steve and I heard this argument before many times? Why yes. Yes we have. It’s a straw man so massive that, were it to be set on fire, it would be easily visible from the International Space Station. Yet proponents of unscientific medicine can’t seem to resist trotting it out, tarting it up, and sending it out to misrepresent our position. No one, least of all Steve and I, argues that we understand enough about basic human biology to say that knowledge is sufficient to predict what therapies will work in humans. What we did argue is that there is more than sufficient evidence from basic science, including physics, chemistry, and, yes, physiology to know that modalities like homeopathy and reiki can’t work. Let me just quote from our article for your edification. Because it’s behind a paywall, I will quote fairly liberally:

It should also be noted that ‘biologically plausible’ does not mean ‘knowing the exact mechanism’. What it does mean is that the mechanism should not be so scientifically implausible as to be reasonably considered impossible. In other words, the mechanism should not violate laws and theories in science that rest on far sturdier and longer established foundations than imperfect, bias-prone clinical trials. For example, homeopathy violates multiple laws of physics with its claims that dilution can make a homeopathic remedy stronger and that water can retain the ‘memory’ of substances with which it has been in contact before [9]. Thus, treatments like homeopathy should be dismissed as ineffective on basic scientific grounds alone. That is why we propose the term science-based medicine (SBM) as opposed to evidence-based medicine (EBM). SBM restores basic science considerations to EBM and is what EBM should be.

Later in the article, Steve and I argue:

Another pernicious effect of performing RCTs on CAM and IM modalities is that it leads to clinical trials of highly implausible treatments that may have a significant potential to cause active harm, as opposed to harm from substituting ineffective for effective treatment. One example is the Trial To Assess Chelation Therapy (TACT), in which chelation therapy was tested as a treatment in patients with heart disease: a US$30 million multicenter clinical trial initially funded by the National Center for Complementary and Alternative Medicine (NCCAM) with virtually no preclinical scientific basis, study sites at ‘alternative’ clinics where reliability was seriously questioned, and the potential for complications due to chelation of critical minerals [10]. A decade and tens of millions of dollars later, after failure to reach accrual goals, TACT results were reported [11,12] and were negative except for one subgroup (diabetics), for which there was ample reason to question the validity of the results [13]. Another example is the trial to test an ‘alternative’ treatment regimen for pancreatic cancer that involves extreme dietary modifications, juices, large quantities of supplements, and coffee enemas. After several years, abandonment of the RCT format for an unblinded ‘patient’s choice’ design, and considerable controversy over delays in publication, the results, when finally reported [14], were disturbing. One year survival of subjects undergoing this protocol was nearly fourfold worse than subjects receiving standard-of-care chemotherapy and worse than expected based on historical controls, all associated with poorer quality of life.

So it is very clear that we were not generalizing our arguments to claim that we know enough about human physiology to know in advance which treatments will work. That is a misstatement of our position that completely misses the point. Our argument says nothing about which treatments will work; it emphasizes predicting which treatments will not work, treatments for which doing clinical trials is a pointless waste of money that potentially endangers human subjects and whose results never convince proponents of alternative medicine to abandon treatments that don’t work. Our argument is that there are some modalities so inherently ridiculous, so completely without even a modicum of support from a scientific standpoint, that we already have enough data and knowledge to know that they can’t work (e.g., homeopathy and reiki, not to mention pretty much all “energy medicine”). Moreover, in specific cases that we enumerated, testing such modalities can be viewed as unethical: the TACT (which we’ve discussed here at SBM many times before), trials of homeopathy for diarrheal diseases in Third World countries (also discussed here multiple times before), and testing a treatment like the Gonzalez protocol for pancreatic cancer (yes, also discussed multiple times here before).

The second part of our argument is that EBM was “blindsided” by CAM and as a result EBM has a “scientific blind spot” because an unspoken assumption behind the EBM evidence pyramid is that treatments do not make it to the stage of RCTs without having demonstrated compelling preclinical evidence of efficacy. Yes, we acknowledge that such evidence isn’t enough to predict which treatments will work, but, again, we never argued that it was. What we argue is that we already know enough about pseudoscientific modalities like homeopathy to know they can’t work. As Kimball Atwood put it, basic science or preliminary clinical studies provide evidence sufficient to refute some health claims (e.g., homeopathy and Laetrile), particularly those emanating from the social movement known by the euphemism “CAM.” Further, Atwood argued that EBM’s founders understood the correct role of the rigorous randomized controlled clinical trial: to be the final arbiter of any claim that had already demonstrated promise by all other criteria, including ‘basic science, animal studies, legitimate case series, small controlled trials, “expert opinion,” whatever (but not inexpert opinion).’ As he argued, EBM’s founders knew that such pieces of evidence, promising though they may be, are insufficient because they “routinely lead to false positive conclusions about efficacy.”

I often add to Dr. Atwood’s discussion an additional point: determining the exact level of prior plausibility necessary to justify an RCT is a legitimate question. A 50% chance of a positive result? 10%? 1%? Less, even? The answer will actually vary by disease and treatment, not to mention the expense of doing the trial, and very likely the prior plausibility of all “complementary and alternative medicine” (CAM) treatments that are not supplements or herbal medicines falls at the very low end of that range. Thus, it comes down to more or less a value judgment in which preclinical estimates of prior probability influence the decision whether or not to proceed with a clinical trial, but factors of ethics, resources, and urgency of the question play prominent roles in the decision as well. What I would hope we can all agree on is that we shouldn’t be subjecting human subjects to tests of therapies whose prior plausibility is so low as to be indistinguishable from zero. Certainly, homeopathy, healing touch, and reiki, among others, fit this criterion quite well. Again, no one is arguing that basic science considerations can predict which therapies will work, although we do argue that in some cases basic science considerations can predict accurately which therapies will not work. I only repeat this again because it’s a message that critics like Vohra and Boon seem willfully unable to grasp and particularly prone to make a flaming straw man out of.
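The Bayesian point here can be made concrete with a little arithmetic. The sketch below (illustrative numbers only; the power and significance threshold are conventional assumptions, not figures from our article) computes the probability that a “statistically significant” trial result reflects a real effect, given a prior plausibility:

```python
# Positive predictive value of a "significant" (p < 0.05) trial result,
# as a function of prior plausibility. Illustrative assumptions:
# 80% statistical power, 5% false-positive rate.

def ppv(prior, power=0.8, alpha=0.05):
    """P(real effect | significant result), via Bayes' theorem."""
    true_pos = prior * power          # trials of real effects that come up positive
    false_pos = (1 - prior) * alpha   # trials of non-effects that come up positive anyway
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01, 0.001):
    print(f"prior {prior:>6}: P(real | significant) = {ppv(prior):.3f}")
```

At a prior of 50%, a significant result is about 94% likely to be real; at a prior of 0.1% (still arguably generous for homeopathy), the same “significant” result is real less than 2% of the time. That asymmetry is the whole argument: for near-zero priors, positive RCT results are nearly all noise.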

Now, remember the point we at SBM have been making about CAM advocates opting for less rigorous trials when more rigorous ones don’t give them the answers they want? Vohra and Boon demonstrate this very well:

Best available data suggest the majority of patients use complementary therapies.[11-13] Evidence about which therapies may be helpful, and which may be harmful, in whom, and why, is urgently needed to guide practice and policy. Randomised controlled trials have become the gold standard for evaluating treatment effectiveness, but they are expensive and resource-intensive. Rather than arguing that gathering clinical evidence about popular therapies is a folly, we suggest consideration of other kinds of innovative trial design, such as N-of-1 trials.[14] These trials can not only inform the care of a single patient in a cost-effective evidence-based manner, they may be aggregated to inform the care of a population.[15] Comparative effectiveness trials [16] or patient-centred effectiveness research are approaches that help to determine which interventions work best for specific patients in specific circumstances. A key component of this kind of research is the concept of pragmatic clinical trials that focus on assessing effectiveness (use of therapies in real-world settings) as opposed to efficacy[17] – the goal of the traditional RCT. There are an increasing number of rigorous study designs that should be used more to generate clinically useful knowledge and to highlight where it would be most useful to invest in additional mechanistic studies.

Note how Vohra and Boon start out with the CAM advocate’s favorite logical fallacy, argumentum ad populum (appeal to popularity), and then proceed to their second massive flaming straw man, the claim that we in any way said that “gathering clinical evidence about popular therapies is a folly.” (I mean, seriously, did these two even actually read our article?) One more time: No one, least of all Steve and I, has said that “gathering clinical evidence about popular therapies is a folly.” What we have said is that doing RCTs on magic (like homeopathy, reiki and “energy medicine,” and treatments like chelation therapy for cardiovascular disease or the Gonzalez protocol for pancreatic cancer) is a waste of resources that can endanger patients and that negative results of these RCTs fail to persuade CAM practitioners to abandon pseudoscientific treatments. As for “pragmatic” trials, these are trials that are done to determine how effective modalities that have already been demonstrated to be efficacious in RCTs are in the “real world.” They are not intended for treatments whose efficacy has not yet been determined. Using pragmatic trials as though they are efficacy trials is a strategy bound to result in false positives, because pragmatic trials usually have no placebo control group and sometimes no control group at all. It’s with good reason that Steve has referred to the use of pragmatic trials in CAM as “bait and switch” and Harriet Hall has referred to them as “Cinderella medicine.” (This is a known problem with pragmatic trials that isn’t limited to CAM, either.) No wonder Andrew Weil likes pragmatic studies so much!

Finally, no one here is arguing against patient-centered effectiveness research or comparative effectiveness research. However, once again, before we can compare the effectiveness of different treatments, we have to know that each of the treatment modalities being compared is actually efficacious; otherwise the comparison cannot produce meaningful results. As for the “N-of-1” trial, homeopaths tend to like to propose such trials because to them “N-of-1” trials resemble homeopathic provings. Be that as it may, conducting N-of-1 trials is anything but trivial. Moreover, as the U.S. Agency for Healthcare Research and Quality puts it:

In parallel group RCTs, blinding of patients, clinicians, and outcomes assessors (“triple blinding”) is considered good research practice. These trials aim to generate generalizable knowledge about the effects of treatment in a population. In drug and device trials, the consensus is that it is critical to separate the biological activity of the treatment from nonspecific (placebo) effects. (For a broader view, see Benedetti et al.26) In n-of-1 trials, the primary aim is usually different. Patients and clinicians participating in n-of-1 trials are likely interested in the net benefits of treatment overall, including both specific and nonspecific effects. Therefore blinding may be less critical in this context. Nevertheless, expert opinion tends to favor blinding in n-of-1 trials whenever feasible.

See why CAM advocates love N-of-1 trials? After all, nonspecific effects are pretty much the main reason why patients and clinicians think that, for example, acupuncture works for anything.

Vohra and Boon then conclude:

We should encourage and celebrate basic science and discovery research as a way to learn about our universe, including the human body. We urge researchers to update their thinking: translational research should not be limited to one direction – from basic science to bedside applications. Complementary therapies are used by the majority of the population. There is a need to explore therapies that appear to offer clinically relevant benefit or harm by investigating their mechanism of action through basic research (i.e. bedside to bench). Basic science is clearly needed, but it should never be a reason to deny the opportunity to evaluate clinical therapies to assess their safety and effectiveness. Both types of research are informative, both advance the human condition, and both have their weaknesses. Together they can be immensely powerful.

This paragraph is just plain ridiculous. First of all, we never said that translational research should be one direction; so there’s the third major flaming straw man. In fact, we explicitly noted that it goes both ways in our paper. Second, complementary therapies are only used by the “majority of the population” if you include exercise, spirituality, vitamins, and diet (among other things) as being “complementary.” It’s a fallacy that we at SBM have discussed many times before, wherein modalities like diet and exercise are the “Trojan horse” that lets the pseudoscience past the walls of academia. I would, however, argue that Vohra and Boon are just plain wrong to argue that basic science “should never be a reason to deny the opportunity to evaluate clinical therapies to assess their safety and effectiveness.” If they don’t believe me, then I would invite them to participate in a double blind, randomized controlled trial of parachute usage to decrease the morbidity and mortality due to jumping out of airplanes. After all, if basic science should never be used as a reason to deny the opportunity to evaluate clinical therapies, then the basic science (physics) that says that a human body falling hundreds or thousands of feet at terminal velocity will go splat when it hits the unyielding earth and that this splatting will result in death or extreme injury—the basic science of biology, people!—shouldn’t be a consideration in assessing the effectiveness of parachute usage in preventing death from falls from high places.

A less extreme example would be testing intercessory prayer for cardiac conditions. Oh, wait. That’s been done. Never mind. That’s the problem with testing highly implausible claims with no physical basis in reality.

Like homeopathy for anything.

Homeopathy for ADHD?

Of course, Steve and I have to wonder why Vohra and Boon were so unhappy with our little editorial, which was only around 1,500 words. (Yes, Steve had a salutary effect on my brevity.) It turns out that we’ve met both Heather Boon and Sunita Vohra before. For example, Jann Bellamy and a certain “friend” of the blog have both taken Vohra to task for advocating the use of pseudoscientific CAM treatments in children. It turns out that Vohra is Centennial Professor in the Department of Pediatrics, Faculty of Medicine and Dentistry at the University of Alberta, and holds a joint appointment in the School of Public Health. She appears to have started out evidence- and science-based but then, as too often happens, drifted into CAM, becoming:

…founding director of Canada’s first academic pediatric integrative medicine program; founding director of first pediatric integrative medicine fellowship program in North America; founding director of the world’s largest pediatric Complementary and Alternative Medicine (CAM) network (www.pedcam.ca); she advises Health Canada (Pediatric Expert Advisory Committee); led the University of Alberta’s membership in Consortium of Academic Health Centers for Integrative Medicine; is past Chair of the American Academy of Pediatrics Section on Integrative Medicine (2012-2014). Dr. Vohra was recognized for excellence in CAM research (Dr. Rogers Prize $250,000) and inducted as fellow of Canadian Academy of Health Sciences, one of the highest honours for any member of the Canadian health sciences community.

I suppose Steve and I should be honored to have attracted the attention of such a heavy hitter in the world of CAM. Based on her stature in CAM, you’d think her arguments would be better, but, as I demonstrated above, you’d be wrong.

I’m not so much interested in Dr. Vohra, though. Rather, I’m more interested in Heather Boon, because she’s been in the news in Canada lately regarding homeopathy. Also, Vohra and Boon have collaborated on clinical trials of homeopathy in children before. No wonder our little “Science and Society” editorial hit a nerve!

In any case, Heather Boon is a pharmacist who is Professor and Dean of the Leslie Dan Faculty of Pharmacy, University of Toronto. As our own Scott Gavura discussed nearly two weeks ago, she is also the principal investigator of a study of homeopathy that’s drawn criticism from scientists all over the world:

A University of Toronto study on homeopathic treatment for children with Attention Deficit Hyperactivity Disorder is being heavily criticized by scientists who claim it legitimizes a pseudoscience.

Two Nobel laureates are among 90 scientists from universities around the world who have signed an open letter calling the clinical trial into question.

“We are curious about why, given the need to investigate natural therapies that may actually have a potential for benefit, and saddled with a scarcity in funding, a Department of Pharmacy is interested in investigating a subject that has been … found wanting both in evidence and plausibility,” reads the letter addressed to Heather Boon, dean of the U of T’s Faculty of Pharmacy, who is leading the study.

Indeed.

You would think that a pharmacist would know why homeopathy is pseudoscience, but apparently Dr. Boon does not. (If she did, she wouldn’t have written the criticism of the article by Dr. Novella and myself that she did; on the other hand, she has been critical of homeopathy in the past, which makes her criticism of our article and her spearheading this trial particularly odd.) Joe Schwarcz, a chemist at McGill University and friend of the blog, has been particularly scathing in his criticism, noting that:

The study is actually to be carried out at the Riverdale Homeopathic Clinic, a private institution that also offers ear candling, cranial sacral therapy and “nosodes,” which are homeopathic versions of vaccines. No public funding is involved; support comes from a foundation dedicated to alternative medicine.

Me being me and all that, and in particular me being obsessive about the details of clinical trials, I looked up Boon’s trial on ClinicalTrials.gov (identifier NCT02086864). It’s a clinical trial of homeopathy in ADHD, a developmental disorder characterized by developmentally inappropriate levels of inattention and/or hyperactive-impulsive behavior with significant impairment in at least two settings. Boon then argues that homeopathy “has been shown to be a promising intervention for ADHD” as justification for this trial, whose three aims are:

  1. To determine if there are any specific effects of homeopathic medicines in the treatment of ADHD
  2. To determine if there are any specific effects of the homeopathic consultation alone in the treatment of ADHD
  3. To determine if there is an overall effect of homeopathic treatment (homeopathic medicines plus consultation) in the treatment of ADHD.

There are three groups:

  • Arm 1 (Verum group): a treatment arm where the participant will receive homeopathic consultation plus a homeopathic remedy
  • Arm 2 (Placebo group): a treatment arm where the participant will receive homeopathic consultation plus a placebo remedy
  • Arm 3 (No treatment/Usual care group): a wait list arm where the participant will not receive homeopathic treatment as part of the study.

The pilot trial to which the study justification refers is this pilot trial published last year (NCT01141634), which was an open label trial without a control group. Not surprisingly given that it was an open-label uncontrolled pilot study, this study was reported as positive, as judged by changes between the pre- and post-study Conners Global Index – Parent (CGI-P) T-score, a measure of ADHD severity.

In fact, I bet I can predict how Boon’s current study will most likely turn out. There will be no specific effect due to any homeopathic treatment, given that anything diluted more than about 11C or 12C has been diluted to the point where no drug is left and will therefore be water. Indeed, that is why homeopathy is pseudoscience. In addition, there might be an improvement based on the “personalized homeopathic consultation.” In other words, results observed in subjects in Arms 1 and 2 will likely be indistinguishable statistically from each other, and results in those arms will likely be better than the results in Arm 3. As I so frequently say about studies like this, there’s no need for the pseudoscience; the same sort of study could be done using standard treatment to see if longer, more personalized consultations produce a better result, no magic needed. In brief, this entire study is a pseudoscientific waste of time.
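The dilution arithmetic behind that 11C–12C claim is easy to check. Each “C” step is a 1:100 dilution, so 12C corresponds to a 10^24-fold dilution. A minimal sketch (the starting quantity of one mole is a deliberately generous illustrative assumption):

```python
# Expected molecules of the original substance surviving serial "C" dilutions.
# Each C step dilutes 1:100; n steps dilute by a factor of 100**n.

AVOGADRO = 6.022e23  # molecules per mole

def molecules_remaining(c_dilutions, starting_moles=1.0):
    """Expected number of original-substance molecules after n 1:100 dilutions."""
    return starting_moles * AVOGADRO / (100 ** c_dilutions)

for c in (6, 12, 30):
    print(f"{c}C: ~{molecules_remaining(c):.3g} molecules expected")
```

Even starting from a full mole of active ingredient, a 12C remedy is expected to contain less than one molecule of it (about 0.6 on average), and a 30C remedy roughly 10^-37 molecules, which is why anything beyond about 12C is, physically, just water.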

So how did Boon respond? She wrote a special article that was published in The Montreal Gazette in response to Prof. Schwarcz’s column, entitled “Why would anyone think a rigorous clinical trial is a bad idea?” It reads like a rehash of the article published in FACT, just directed at a lay audience. The question itself that is the title, reiterated near the beginning of the article, shows just how little Boon understands.

She is also, as they say, talking out of both sides of her mouth. She starts out asserting:

Our study — one of the most rigorous studies of homoeopathy to date — is being conducted because patients are using homoeopathy to treat ADHD. Previous studies on this topic have produced varying results and have been criticized for methodological flaws.

We are studying this topic for the same reason most scientists conduct studies: to discover the truth, or at least to get closer to it. As a scientist, I am committed to conducting high-quality studies that help patients and clinicians make evidence-informed choices about a wide range of popular products and therapies, and this study is no different.

She even boasts that her study had received approval from Health Canada and two Research Ethics Review Boards while expressing puzzlement how a “well-designed clinical trial legitimizes anything, as we don’t know what the study will find.” Again, Boon completely misses the point, just as she did in her criticism of our article. Her clinical trial is studying pixie dust. It’s tooth fairy science, as it is studying the efficacy of a treatment whose very premises are based on vitalistic magic. There is no scientific reason to think that homeopathy will benefit ADHD. It is water.

She then justifies her choice:

Biological plausibility is one way to choose which interventions to study, and is the most common way we bring new pharmaceutical drugs to market. Given the number of promising drug candidates that never make it to market, it is clear that biological plausibility does not always translate into efficacy or effectiveness in patients. Another way to identify interventions to study is to start with an observation — often claims from patients that a product, treatment or therapy helps them — and then set out to investigate this observation. In either approach, determining whether something ultimately “works” or not is achieved through investigation in rigorously designed clinical trials.

Again, no one is arguing that you always have to start with preclinical observations. In fact, the paradigm is often “bench to bedside to bench” or “bedside to bench to bedside.” As Steve and I pointed out in our article, clinical observations and lab observations frequently cross-pollinate. The idea is that if there is a clinical observation of an effect, then the first course is to go to the lab and try to figure out why that might be. Of course, for such an approach to work, regardless of what comes first, there has to be a convincing clinical observation, and certainly Boon and Vohra’s pilot study of homeopathy for ADHD does not qualify, nor do any studies of homeopathy, as even the “positive” ones tend to be just barely “statistically significant,” poorly designed, or fraught with potential bias. In other words, when it comes to homeopathy there’s no “there” there. For a clinical observation to have even a hope of trumping the nearly two centuries’ worth of basic science that tells us homeopathy is impossible would require unequivocal, undeniable observations on multiple fronts.

Oddly enough, Boon then backtracks a little, as if to say, “Oops! We’re not trying to determine if homeopathy works after all!” See what I mean:

Our pilot study was designed to tell us about patients’ experiences using homeopathic therapies to treat ADHD, not to assess the efficacy of those treatments. Many patients in this study felt better after visiting a homeopath. Of course, there are many possible explanations for this, which is why we have embarked on another, more comprehensive study to investigate two possible explanations. Employing a randomized, placebo-controlled three-arm study, we will be able to identify whether or not patients in the trial have a decrease in symptoms after homeopathic treatment. If patients’ symptoms decrease, we will be able to explore whether this effect appears to be related to the consultation with the homeopath or to the homeopathic medicine or a combination of both.

Except that, as I mentioned above, you don’t need to have a pseudoscientific remedy like homeopathy to do a study like this. Just do it with standard care for ADHD. It’s unethical to use a pseudoscientific and ineffective treatment like homeopathy in a clinical trial, as the most recent iteration of the Declaration of Helsinki points out, stating that “medical research involving human subjects must conform to generally accepted scientific principles, be based on a thorough knowledge of the scientific literature, other relevant sources of information, and adequate laboratory and, as appropriate, animal experimentation.” Unfortunately, ethics boards never seem to understand that homeopathy conforms to none of these things.

Boon concludes with a particularly nauseatingly sanctimonious, self-righteous rhetorical question:

We are putting homeopathy to the test and trusting in the scientific method to help us in our search for the truth.

After all, isn’t that the purpose of research?

My retort: We have already trusted in the scientific method, and it tells us that, for homeopathy to work, several laws of physics would have to be not just wrong, but spectacularly wrong. That scientific method tells us that there’s no point in subjecting children with ADHD to homeopathy. As scientists, we always acknowledge that there is a tiny chance those mountains of research in physics, chemistry, and biology that tell us homeopathy is impossible could be mistaken. However, for homeopathy to be shown to have a plausibility distinct from zero would require basic science evidence in favor of the principles of homeopathy comparable in quality and quantity to the research accumulated since the time of Amedeo Avogadro, whose eponymous number tells us that typical homeopathic dilutions are unlikely to contain even a single molecule of the starting substance. That evidence doesn’t exist. Respecting science doesn’t mean subjecting human subjects in clinical trials to quackery like homeopathy in the futile hope that it will work when the research demonstrating that homeopathy is pseudoscience has existed for over 200 years and been accepted for at least 150 years.

In reality, it is not Boon who respects science. By embracing homeopathy as a legitimate research topic, she has embraced pseudoscience. It is Joe Schwarcz and the scientists and physicians who signed his letter who truly respect science.

ADDENDUM: Steve Novella has also posted a rebuttal to Vohra and Boon entitled “Basic Science Should Inform Clinical Science.”

Posted by David Gorski

Dr. Gorski's full information can be found here, along with information for patients. David H. Gorski, MD, PhD, FACS is a surgical oncologist at the Barbara Ann Karmanos Cancer Institute specializing in breast cancer surgery, where he also serves as the American College of Surgeons Committee on Cancer Liaison Physician as well as an Associate Professor of Surgery and member of the faculty of the Graduate Program in Cancer Biology at Wayne State University. If you are a potential patient and found this page through a Google search, please check out Dr. Gorski's biographical information, disclaimers regarding his writings, and notice to patients here.