If there’s one thing I’ve learned over the past seven years or so that I’ve been blogging, first at my other “super secret” (or, more accurately, super “not-so-secret”) blogging location, and then the four years I’ve been blogging here at Science-Based Medicine (SBM), it’s that the vast majority of “alternative medicine,” “complementary and alternative medicine” (CAM), and “integrative medicine” (IM) treatments (or whatever you want to call them) are nothing more than placebo medicine. True, there are exceptions, such as herbal treatments, mainly because they can contain chemicals that are active drugs, but any critical look at things like homeopathy (which is water), reiki (which is faith healing substituting Eastern mystical beliefs for Christianity), acupuncture (whose effects, when tested rigorously, are found to be nonspecific), or “energy healing” must conclude that any effects these modalities have are placebo effects or responses. Given writings on this topic by Steve Novella, Mark Crislip, Harriet Hall, Peter Lipson, myself, and others, this should be abundantly clear to readers of this blog, but, even so, it bears repeating. In fact, it probably can’t be repeated enough.

There was a time not so long ago when proponents of unscientific medicine tried very, very hard to argue that their nostrums have real effects on symptoms and disease above and beyond placebo effects. They would usually base such arguments on small, less rigorously designed clinical trials, mainly because, if there’s another thing I knew from my medical education that has been particularly reinforced in me since I started blogging, it’s that small clinical trials are very prone to false positives. Often they’d come up with some handwaving physiological or biological explanation, which, in the case of something like homeopathy, often violated the laws of chemistry and physics. Be that as it may, the larger and more rigorously designed the clinical trial, the less apparent the effects become until, in the case of CAM therapies that do nothing (like homeopathy), they collapse into no effect detectable above that of placebo. Even so, there are often enough apparently “positive” clinical trials of water (homeopathy) that homeopaths can still cling to them as evidence that homeopathy works. Personally, I think that Kimball Atwood put it better when he cited a homeopath who said bluntly, “Either homeopathy works, or clinical trials don’t!” and concluded that, for highly implausible treatments like homeopathy, clinical trials as currently constituted under the paradigm of evidence-based, as opposed to science-based, medicine don’t work very well. Indeed, contrasting SBM with EBM has been a major theme of this blog over the last four years. In any case, for a long time, CAM enthusiasts argued that CAM really, really works, that it does better than placebo, just like real medicine.

Over the last few years, however, some CAM practitioners and quackademics have started to recognize that, no, when tested in rigorous clinical trials their nostrums really don’t have any detectable effects above and beyond that of placebo. A real scientist, when faced with such resoundingly negative results, would abandon such therapies, because, by definition, a placebo therapy is one that doesn’t do anything for the disease or condition being treated. CAM “scientists,” on the other hand, do not abandon therapies that have been demonstrated not to work. Instead, some of them have found a way to keep using such therapies. The way they justify that is to argue that placebo medicine is not just useful medicine but “powerful” medicine. Indeed, an article by Henry K. Beecher from 1955 referred to the “powerful placebo.” This construct then allows them to “rebrand” CAM unashamedly as “harnessing the power of placebo” as a way of defending its usefulness and relevance. In doing so, they like to ascribe magical powers to placebos, implying that placebos do not merely decrease the perception of pain or other subjective symptoms but can in fact lead to objective improvements in a whole host of diseases and conditions. Some even go so far as to claim that there can be placebo effects without deception, citing a paper in which the investigators — you guessed it! — used deception to convince their patients that their placebos would relieve their symptoms. Increasingly, placebos are invoked as a means of “harnessing the power of the mind” over the body in order to relieve symptoms and cure disease in what at times seems like a magical mystery tour of the brain.

Part of what allows CAM practitioners to get away with this is that placebo effects are poorly understood even by most physicians and, not surprisingly, even more poorly understood by the public. Moreover, we all like to think that we have more control than we do over our bodies and, in particular, illnesses and symptoms, which is why the selling of placebo effects as a means of harnessing some innate hidden power we have to control our own bodies through the power of mind is so attractive to so many, including some scientists and physicians. Exhibit A is Ted Kaptchuk, the researcher from Harvard University responsible for spinning an interesting study of placebo effects in asthma into the invocation of the power of placebo. Kimball Atwood has written extensively about Kaptchuk recently, revealing his rather dubious background and arguments. More recently, however, Kaptchuk seems to be everywhere, appearing in articles and interviews, promoting just the argument I’m talking about, that CAM is a way of harnessing placebo effects, so much so that I felt it was time to take a look at this argument.

He’s here, he’s there, he’s everywhere

Over the last month or so, Ted Kaptchuk has been seemingly all over the place in some very prominent media outlets, including:

  1. An article in the Wall Street Journal by Shirley S. Wang, Why Placebos Work Wonders: From Weight Loss To Fertility, New Legitimacy For ‘Fake’ Treatments.
  2. An article in the New Yorker by Michael Specter, The Power of Nothing: Could Studying the Placebo Effect Change the Way We Think About Medicine?
  3. An article in The Atlantic by Elaine Schattner, The Placebo Debate: Is It Unethical to Prescribe Them to Patients?
  4. An interview on Boston’s WBUR with Jessica Alpert, Demystifying The Power Of “The Placebo Effect”
  5. An interview on Science Friday with Ira Flatow, One Scholar’s Take On The Power of The Placebo

Much of what was published in these stories reminds me just how badly placebos are misunderstood. Perhaps the best (or, more appropriately, the “worst”) example is the article by Shirley S. Wang. If you want a primer on how not to write about placebos as a journalist, you’d be hard pressed to find a better example to study than this article. Wang falls for all the tropes that CAM/IM advocates like to use to argue that their methods are something more than ineffective treatments that can provoke a placebo response. In fact, jumping ahead in her article a bit, I know she has no understanding of the issues involved (or perhaps an active misunderstanding) when she writes a passage like this:

Ted Kaptchuk, director of Harvard’s Program in Placebo Studies and the Therapeutic Encounter, and colleagues demonstrated that deception isn’t necessary for the placebo effect to work. Eighty patients with irritable bowel syndrome, a chronic gastrointestinal disorder, were assigned either a placebo or no treatment. Patients in the placebo group got pills described to them as being made with an inert substance and showing in studies to improve symptoms via “mind-body self-healing processes.” Participants were told they didn’t have to believe in the placebo effect but should take the pills anyway, Dr. Kaptchuk says. After three weeks, placebo-group patients reported feelings of relief, significant reduction in some symptoms and some improvement in quality of life.

Wang so completely fell for the spin that Kaptchuk put on this study, and so obviously doesn’t understand what she’s writing about, that it makes me instantly question the rest of her article, particularly the parts where she cites studies. Fortunately, in this case, several of us have already blogged about the actual primary article she cites and explained exactly why Kaptchuk’s spin, namely that the study indicated it was possible to induce placebo responses without deception, doesn’t hold up. The long version is in this post I wrote about a year ago. The short version is the observation that subjects were recruited for this study by ads touting a “novel mind-body management study of IBS [irritable bowel syndrome],” which introduced selection bias for people prone to be interested in “mind-body” interactions. Moreover, while it is true that Kaptchuk and his team told subjects that they would be receiving placebos, they also told subjects that the sugar pills used “have been shown in rigorous clinical testing to produce significant mind-body self-healing processes,” which is, to put it kindly, an exaggeration. Add to all this that the outcome measures were custom-designed to exaggerate any effect. The no-treatment arm demonstrated an IBS Global Improvement Score of 4 (no change) compared to the open placebo arm, which averaged 5 (slightly improved), a difference highly unlikely to be clinically significant. Despite all these problems, this study was widely touted as somehow being slam-dunk evidence that placebo effects can be invoked without deception when it is anything but. The best possible spin that could be put on this study is that it is consistent with previous work showing that expectation effects are important in placebo effects. In other words, if you expect an effect, even if you know you’re taking a placebo, you’re more likely to feel better.

Of course, I shouldn’t be too hard on Wang, I suppose, at least not for this. After all, the study apparently fooled Edzard Ernst himself into calling it “elegant.” I’ll also mention that she discusses (and gets mostly right) another study that I blogged about just last summer. In fact, it was a study that was prominently featured in the discussion panel that I participated in at TAM last summer, along with Steve Novella, Kimball Atwood, Mark Crislip, Harriet Hall, Rachael Dunlop, and Ginger Campbell. Steve even mischievously switched back and forth between two of the graphs in the paper to make a point. Yes, I’m referring to the “placebo in asthma” study, or, as I called it, Spin City, or, as Peter Lipson referred to it, Asthma, placebo, and how not to kill your patients. Wang correctly points out that only the active treatment (albuterol) improved the underlying biology but that both groups felt better. This is more or less the very definition of placebo effects: feeling better without any actual physiological improvement or correction of the underlying pathophysiology. Yet that’s not the overall impression that her article gives, as she cites a number of studies that suggest that placebo effects are more than just an effect on “how a person experiences or reacts to an illness.” She even uses an argument from popularity, pointing out how many physicians knowingly prescribe placebos based on a study from 2008 which, I can’t help but mention, I also blogged about when it came out, as did Abel Pharmboy, Janet Stemwedel, Jake Young, revere, and Peter Lipson, who quite aptly said about this study, “Placebo — I do not think it means what you think it means.” The reason is that the authors counted many things as placebos, including pills known to have actual pharmacologic activity, a consequence of how they defined placebo, as Peter pointed out:

In the current study, a placebo is defined as “positive clinical outcomes caused by a treatment that is not attributable to its known physical properties or mechanism of action.” This implies that the physician either knows the treatment shouldn’t work, or doesn’t understand how it works. This isn’t just semantics; we have many treatments available whose exact mechanism of action isn’t known, but whose effectiveness has been proved. If you interpret the definition less strictly, it oxymoronically defines a placebo as something that works despite it’s lack of efficacy. If I prescribe something expecting a predictable effect, and it produces that effect, by definition it isn’t a placebo. If I prescribe something I expect to work, and it doesn’t, then it isn’t a placebo. If I prescribe something expecting failure, but it works, I’m a lucky idiot. This would seem to imply that there is no such thing as a placebo (and I might agree).

Certainly, our colleague Mark Crislip agrees, going so far as to refer to the placebo myth and the prostrate placebo (although my favorite Crislip-ism about CAM and placebos was when Crislip referred to CAM as the beer goggles of medicine).

Unfortunately, even Michael Specter, whom I’ve generally considered to be a good science journalist, wasn’t up to his usual standards in his New Yorker article. As Steve Novella pointed out, he hit most of the high points but clearly seemed too influenced by Kaptchuk, who, complete with various anecdotes about placebo effects, was portrayed as a “brave maverick doctor” who is “reviled” by the rest of the medical profession for challenging its paradigms.

Elaine Schattner did better, although even she seemed enamored of the spin that Kaptchuk puts on placebo. Indeed, she even accepted his description of his studies:

You might wonder, lately, if placebos can confer genuine health benefits to some people with illness. If you’re reviewing a serious publication like the New England Journal of Medicine, you could be persuaded by results of a recent article on giving placebos to asthma patients in a randomized clinical trial. The topic has blossomed since 2008, when PLoS One reported on the use of mock treatments, without concealment, in people with irritable bowel syndrome. In that small study, participants experienced symptomatic relief even though they knew they were getting bogus remedies.

No, no, no, no. The NEJM article showed nothing of the sort; there was no benefit to patients receiving placebo. That was the point, albeit danced around by Kaptchuk and Dan Moerman. And, again, it bears repeating that the IBS study did not show that the use of mock treatment “without concealment” results in symptomatic improvement. As I wrote before, there wasn’t the traditional type of concealment, in that the patients were aware they were receiving placebos, but there was deception, in that the placebos were represented to the patients as producing “powerful mind-body healing effects” (at the very least there was exaggeration). The strength of Schattner’s article is that it emphasized that it’s unethical to deceive patients. Harriet Hall says it; bioethicist Frank Miller says it; even Ted Kaptchuk says it. Its weakness is in taking Kaptchuk’s spin on his work at face value. Unfortunately, Ira Flatow, from whom I would expect better, falls even harder for the same spin. He interviewed Kaptchuk for a Science Friday segment just last Friday, and at the very beginning of the interview he accepted Kaptchuk’s spin on the IBS study, declaring it evidence that “placebos work even when patients are in on the secret.”

Meanwhile, Jessica Alpert declares:

Right here in Boston, in the heart of the city’s respected and centuries-old medical establishment, there’s a research center that is challenging long held views about how we treat illness, how we measure the effectiveness of treatment, and how the power of the mind — rather than drugs — can actually cure illness.

We’re talking about new and controversial research into the so-called placebo effect. There’s growing evidence that placebos can actually help cure people, or at least make them feel better. If true, this could spark a major revolution in medicine, because traditionally placebos have had a bad name. After all, they’re usually a fake pill, nothing more than a bit of sugar, for example, designed to deceive people in clinical drug trials.

But now there’s evidence that in some cases placebos — even when people knowingly take them — work just as well as real drugs and actually make people better. This has potentially enormous implications in the way we think about medicine, and how we understand the power of suggestion and belief in healing.

With such credulous and uncritical reporting about placebo effects running rampant, with even good science journalists like Michael Specter tainting his reporting on placebos with a bit too much credulity towards the claims of someone like Ted Kaptchuk, is it any wonder that so many people think that there’s something mystical about how placebos work (or don’t work) or that so many think that placebos represent “mind-body healing” or the power of Mind (capitalization intentional) over matter?

Some recent science on placebo effects

All too often, “placebo” seems to mean exactly what people choose it to mean, no more, no less (apologies to Lewis Carroll). In reality, a placebo is nothing more than “a substance or procedure a patient accepts as medicine or therapy, but which has no specific therapeutic activity” or, as Wikipedia now defines it, “simulated or otherwise medically ineffectual treatment for a disease or other medical condition intended to deceive the recipient.” This strikes me as a better definition than the old one because it emphasizes that placebos are medically ineffectual and that they involve deceiving the patient. Indeed, the necessity of deception is, despite Kaptchuk’s claims otherwise, part and parcel of placebo use, which is one key reason why using placebos has fallen out of favor. Using placebos outside of a clinical trial is now generally considered at best paternalistic and at worst downright unethical, because it violates informed consent and patient autonomy. Sixty or seventy years ago, it was considered acceptable for physicians to deceive patients that way. In 2012, not so much.

In any case, what we call the “placebo effect” is not a single effect; it has many components. Placebos are actually best viewed as a rather artificial tool used in clinical trials to control for nonspecific effects. There are expectation effects, in which patients experience what they are led to expect to experience. There are effects due to observation: patients in clinical trials almost always do better than those not in clinical trials, thanks to the closer attention and more rigorous treatment protocols, something sometimes referred to as the “clinical trial effect,” and both patients and doctors often unconsciously modify their behavior based on their knowledge that they are being observed, an effect known as the Hawthorne effect. Then there are effects due to reporting, which can introduce bias. Finally, there are effects due to regression to the mean, which arise from the natural waxing and waning of symptoms. Patients tend to try a treatment when their symptoms are at their worst; because most symptoms wax and wane, there’s a good chance that a patient’s symptoms will “regress to the mean” on their own even if the patient does nothing. However, if a patient has taken a remedy, even a placebo like homeopathy, at a time when their symptoms are at their worst, it’s very common for them to attribute their improvement to the medication. Correlation, of course, does not equal causation. In any case, depending on the timing of the clinical trial’s measurements, regression to the mean can play a role in apparent placebo effects.
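
Regression to the mean is easy to demonstrate with a toy simulation. Here is a minimal sketch, with entirely made-up numbers chosen purely for illustration: each simulated patient’s daily symptom score fluctuates randomly around a stable personal baseline, the patient “starts a remedy” on an unusually bad day, and on average the next measurement looks like an improvement even though no treatment of any kind was given.

```python
import random

random.seed(0)

def simulate_regression_to_mean(n_patients=10_000, n_days=30, threshold=80):
    """Symptom scores fluctuate around a stable baseline; no treatment is ever given."""
    improvements = []
    for _ in range(n_patients):
        baseline = random.gauss(50, 5)  # patient's true mean severity (arbitrary units)
        days = [baseline + random.gauss(0, 15) for _ in range(n_days)]
        # Patients "start the remedy" on a day their symptoms are unusually bad
        bad_days = [i for i, score in enumerate(days[:-1]) if score > threshold]
        if not bad_days:
            continue  # this patient never had a day bad enough to seek treatment
        start = random.choice(bad_days)
        follow_up = days[start + 1]  # next day's score, drawn from the same distribution
        improvements.append(days[start] - follow_up)
    return sum(improvements) / len(improvements)

mean_improvement = simulate_regression_to_mean()
print(f"Average 'improvement' with no treatment at all: {mean_improvement:.1f} points")
```

The apparent improvement is pure selection artifact: the follow-up day is an ordinary draw from the same distribution, while the start day was chosen precisely because it was extreme.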

Given how complex placebo effects are, it should not be surprising that a whole host of other factors determine how strong a placebo effect will be in any given situation. Surgery and invasive procedures are more powerful placebos than pills, for instance. More expensive placebos tend to produce stronger apparent effects. Indeed, there’s a whole hierarchy of placebos, and placebo effects can be enhanced by things like empathy and the doctor-patient relationship. Traditionally, it’s been believed, based on a number of lines of evidence, that practitioner empathy has a major effect on placebo responses.

Just last month, a group out of the University of Southampton published, online ahead of print, a rather intriguing study that somewhat challenges that paradigm, entitled Practice, practitioner, or placebo? A multifactorial, mixed-methods randomized controlled trial of acupuncture. Its conclusions are a mixture of the provocative and the mundane (i.e., in line with what we already know). Its design is complicated enough to explain that I have some concern it may be too complicated to draw firm conclusions from. Let’s start with the abstract:

The nonspecific effects of acupuncture are well documented; we wished to quantify these factors in osteoarthritic (OA) pain, examining needling, the consultation, and the practitioner. In a prospective randomised, single-blind, placebo-controlled, multifactorial, mixed-methods trial, 221 patients with OA awaiting joint replacement surgery were recruited. Interventions were acupuncture, Streitberger placebo acupuncture, and mock electrical stimulation, each with empathic or nonempathic consultations. Interventions involved eight 30-minute treatments over 4 weeks. The primary outcome was pain (VAS) at 1 week posttreatment. Face-to-face qualitative interviews were conducted (purposive sample, 27 participants). Improvements occurred from baseline for all interventions with no significant differences between real and placebo acupuncture (mean difference −2.7 mm, 95% confidence intervals −9.0 to 3.6; P = .40) or mock stimulation (−3.9, −10.4 to 2.7; P = .25). Empathic consultations did not affect pain (3.0 mm, −2.2 to 8.2; P = .26) but practitioner 3 achieved greater analgesia than practitioner 2 (10.9, 3.9 to 18.0; P = .002). Qualitative analysis indicated that patients’ beliefs about treatment veracity and confidence in outcomes were reciprocally linked. The supportive nature of the trial attenuated differences between the different consultation styles. Improvements occurred from baseline, but acupuncture has no specific efficacy over either placebo. The individual practitioner and the patient’s belief had a significant effect on outcome. The 2 placebos were equally as effective and credible as acupuncture. Needle and nonneedle placebos are equivalent. An unknown characteristic of the treating practitioner predicts outcome, as does the patient’s belief (independently). Beliefs about treatment veracity shape how patients self-report outcome, complicating and confounding study interpretation.
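
As a sanity check on those numbers, one can back out the test statistic from a reported confidence interval. This is only a rough sketch assuming a simple normal approximation (the paper’s actual model may differ), but it reproduces the reported P value for the real-versus-placebo acupuncture comparison:

```python
from math import erf, sqrt

# Reported in the abstract: real vs. placebo acupuncture, mean difference
# -2.7 mm on a 100 mm VAS, 95% CI -9.0 to 3.6
diff, lo, hi = -2.7, -9.0, 3.6

se = (hi - lo) / (2 * 1.96)  # back out the standard error from the CI width
z = diff / se                # test statistic under the normal approximation
p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value

print(f"SE = {se:.2f} mm, z = {z:.2f}, p = {p:.2f}")  # p comes out near the reported .40
```

A confidence interval that comfortably straddles zero, as this one does, is just another way of saying no detectable difference.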

It’s probably easier to borrow the figure in the paper that explains how this study was laid out (click on the image to enlarge):

Note how this design results in eighteen different experimental groups: three practitioners treating patients with either the “empathic” or the “non-empathic” consultation protocol, each combination further divided into real acupuncture, sham acupuncture (the Streitberger placebo acupuncture technique), or mock electrical stimulation. It is these groups that were analyzed in different combinations (for example, empathic versus non-empathic, sham versus real acupuncture, practitioner 1 versus practitioner 2 versus practitioner 3). In the end, 221 patients were randomized, and each small experimental group ended up with between 5 and 20 subjects. To assess possible confounding factors, the investigators also recorded attitudes towards complementary medicine (via the holistic complementary and alternative medicine questionnaire, HCAMQ), perceived empathy (via the consultation and relational empathy, or CARE, questionnaire), analgesic intake (tablet count), and needling sensation. Patients were also given a daily pain diary (a 100 mm visual analogue scale, VAS) to complete for 7 pretreatment days and during treatment. Finally, at treatment completion patients were asked, “Do you think the treatment you had was real?” and required to give a simple yes or no answer. One strength of the study is that high percentages of subjects (75% to 96%, depending on group) believed that they were receiving “real” treatments.
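
For readers who find the factorial layout hard to visualize, enumerating it directly may help. This is just an illustrative sketch; the group labels are mine, not the paper’s:

```python
from itertools import product

practitioners = ["practitioner 1", "practitioner 2", "practitioner 3"]
consultations = ["empathic", "non-empathic"]
interventions = ["real acupuncture", "Streitberger sham acupuncture",
                 "mock electrical stimulation"]

# Every combination of the three factors defines one experimental cell
cells = list(product(practitioners, consultations, interventions))
print(f"{len(cells)} experimental groups")  # 3 x 2 x 3 = 18
for practitioner, consultation, intervention in cells[:3]:
    print(f"  {practitioner} | {consultation} consultation | {intervention}")
```

With 221 patients spread over 18 cells, the average cell holds only about a dozen subjects, which is why the analyses compare pooled margins (all empathic versus all non-empathic, and so on) rather than cell against cell.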

Let’s start with the unsurprising result of the study: all three main groups, real acupuncture, sham acupuncture, and mock electrical stimulation, experienced a decrease in pain, with no difference between them. This result is, of course, completely consistent with what studies have found time and time again about acupuncture, namely that it performs no better than placebo. Similarly, the observation that belief in the therapy (i.e., belief that the subject was receiving “real” therapy and that the therapy would be effective) correlated with improved pain outcomes was expected and consistent with the previous literature on placebo responses. The surprising result was that there was no reported difference between patients receiving empathic and non-empathic consultations. This finding, the authors reported, was unexpected. But what does it mean?

In this study, empathic treatments were described thusly:

Empathic (EMP) consultations were deemed to be normal pragmatic treatment sessions. Patients were greeted in a friendly, warm manner and were free to enter into conversation with their practitioner, who in turn would willingly do so. Practitioners did their utmost to comply with participants’ wishes, providing detailed answers to questions and emphasising patient comfort and well-being.

Non-empathic interactions consisted of this:

This encounter was more “clinical” in nature. Patients were greeted in an efficient manner and quietly shown to the treatment cubicle. Practitioners would only discuss matters directly relating to the treatment to enable them to effectively carry out that treatment, e.g., pattern of pain and side effects. Necessary explanations were kept as short as possible, and if patients attempted to enter into any discussion, the practitioner would respond using the words “I’m sorry but because this is a trial I am not allowed to discuss this with you.” Between needle stimulations, patients were left on their own in a curtained cubicle.

Based on previous results, we would expect that patients receiving the empathic consultations would report more pain relief, but such was not the case in this study. The authors speculated quite a bit about why this was, from their questionnaire not reflecting empathy adequately to a rather interesting potential confounder in which patients made excuses for the non-empathic interactions:

Participants in empathic consultations described practitioners as caring, friendly, and communicative. Those in nonempathic consultations undertook a little more work to explain their similarly positive views of their practitioners. Thus interviewees who had received nonempathic consultations talked about how they colluded with the practitioner to obey the study rules and have limited personal interactions. They suggested that the practitioners were not really nonempathic, they were just acting that way for the sake of the trial. For example, “I had the feeling that she sort of felt that you know, not being able to converse properly, that she felt a bit awkward about it” (Betty, nonempathic).

To me this suggests that patients really, really want to believe that their practitioners care about them and will go to great lengths to align their interpretations of observed behavior with that belief. Perhaps, then, empathy isn’t as powerful an inducer of placebo effects as we might expect.

Another intriguing result of this study was the observation of a definite practitioner effect independent of consultation type. Specifically, one practitioner (Practitioner 3) produced consistently better outcomes across all treatment and consultation types. The investigators report that this was observed “in spite of all the meticulous care and planning taken to ensure consistency of treatment delivery among the three practitioners.” Why? It wasn’t empathy, but rather something that wasn’t being measured in the study design, something unknown. The authors couldn’t identify it, but they did speculate that it had to do with patients viewing Practitioner 3 as more authoritative, expert, and confident:

The qualitative data suggested that the interviewees perceived practitioner 3 as a paternalistic male authority figure. Practitioner 3, as the primary investigator, might have been seen by patients as the expert, consequently establishing higher expectations of success, which in turn influenced outcome. Although this is consistent with previous research [10] and [29] a larger explanatory study involving many practitioners is needed.

This was in contrast to Practitioner 1:

Interviewees referred to Practitioner 1 by her first name and as a “girl” and a “young lady,” and some described her using affectionate terms such as “sweet.” Practitioner 3 was referred to as “Doctor,” was never referred to by his first name, and was typically described in more respectful than affectionate terms, including “courteous” and “formal but friendly.” Participants seem to have seen practitioner 3 as more authoritative than practitioner 1.

This is, of course, an observation that opens a can of worms that will be very difficult to deal with. Kimball Atwood once referred to integrative medicine and patient-centered care as the “new paternalism.” I chuckled when I read Atwood’s take on the issue, but I have to admit that he is probably more correct than I wanted to admit. It may well be that modern constructs in medicine that involve truly informed consent and respect for patient autonomy are directly in conflict with Kaptchuk’s apparent vision of physicians as healing shamans whose interactions with patients are as powerful at healing as the medications and procedures at their disposal.

Placebo: Real versus fantasy?

In the end, I’m coming to agree more with Mark Crislip than I used to in that I’m starting to question whether placebos are nearly as powerful as they are commonly advertised, although I don’t think I’d go so far as to call them a myth. Placebo effects, more than anything else, appear to involve changes in how pain or other subjective symptoms are perceived, not any physiological change that concretely affects the course of a disease. Consistent with this concept, I have yet to come across a study that provides serious objective evidence that placebos change “hard” objective outcomes, such as survival in cancer. What placebo is frequently claimed to be by advocates like Kaptchuk but is almost certainly not is “mind over matter” or thoughts and mind controlling health. Unfortunately, Kaptchuk and his ilk frequently find willing mouthpieces like Wang to spread this message because it’s so seductively appealing. After all, who doesn’t want to believe that we can control our health with our minds? Who doesn’t want to feel that powerful, particularly when disease strips us of control? Unfortunately, the overselling of nonspecific placebo effects leads to the impression that somehow placebos can cure disease, even though it’s been shown time and time again that placebo effects do not shrink tumors or change the underlying pathophysiology of disease. As has been reaffirmed in a recent Cochrane review (ironically, coauthored by Ted Kaptchuk), there is no good evidence for objective responses due to placebo; placebos serve, more than anything else, to change a patient’s perception of his or her symptoms. Beer goggles of medicine, indeed.

Unfortunately, people like Kaptchuk who believe in CAM draw the wrong conclusions from their work on placebo effects, even when that work is actually pretty good in and of itself. To Kaptchuk and scores of other quackademics, placebo effects provide a new rationale to use CAM even though the vast majority of it is placebo medicine, by which I mean treatments that are physiologically inert but represented by practitioners to patients as real medicine. Clinical trials, as ill-advised as many of them are, continue to reinforce that conclusion. In medicine, when a treatment performs no better than placebo, it is interpreted, and correctly so, as meaning that the treatment doesn’t work. Thanks to the magic of “mind-body” placebos, propagandists like Kaptchuk have found a new rationale to use the ineffective treatments that make up so much of CAM.

Michael Specter quotes Ted Kaptchuk as asking, “Do you think this entire field is based on a foundation of magical thinking, or do you not?” That is the wrong question, a massive straw man in fact. No one that I’m aware of, least of all myself, is arguing that the entire field of placebo research is based on magical thinking. In fact, I find studies of placebo effects intriguing and often worthwhile. I am, however, arguing that the way people like Ted Kaptchuk co-opt placebo effects as evidence for “powerful mind-body healing,” or as a rationale for using placebos like acupuncture, homeopathy, or “energy healing,” is based on magical thinking. After all, we already know that empathy and paying attention to patients improve their perception of their symptoms, and treatment with SBM also has a placebo component. We don’t have to invoke magic or pseudoscience or deceive patients paternalistically in order to maximize these effects; yet that is what Kaptchuk and his fellow travelers are implicitly advocating through their rebranding of CAM as “placebo medicine” and of the placebo as “powerful mind-body healing.” In the end, all too much of the rebranding of CAM as placebo and the selling of placebos as some sort of powerful “mind-body healing” strikes me as being much like The Secret, in which wishing makes it so.



Posted by David Gorski