We frequently write about placebo effects here at SBM, because they are a misunderstood phenomenon that is frequently abused by purveyors of unscientific and pseudoscientific treatments that fall under the rubric of alternative medicine, “complementary and alternative medicine” (CAM), or, as CAM is now more frequently called, “integrative medicine” or “integrative health.” (Note the plural, as there is no one “placebo effect”.) We’ve noted that understanding current science about placebo effects is critical to understanding science-based medicine, because misconceptions about placebo effects are rampant even among scientific and medical professionals who otherwise embrace a science-based worldview, and those misconceptions make them prone to claims by practitioners of pseudoscientific medicine attributing great “healing power” to placebo effects. As our fearless leader Steve Novella has noted, the persistence of these misconceptions is due partly to the fact that the false narrative about placebos, namely that “the” placebo effect (as opposed to placebo effects) is mainly a mind-over-matter effect based on expectation or even “harnessing the power of positive thinking“, is deeply embedded in our culture. This belief has started to infiltrate the thinking of even doctors who should know better, aided and abetted by recent attempts by CAM proponents to promote placebo medicine as their preferred treatments are increasingly shown to be nothing but placeboes. They are, in essence, falsely “rebranding” CAM as “harnessing the power of placebo.”
Over the last ten years of writing for SBM, I haven’t (quite) gone as far as Mark Crislip (who is missed here) to conclude that placebo effects are a myth, or, as he put it in his own inimitable way, the “beer goggles of medicine“, in which the patient’s perception is changed but nothing else. On the other hand, there is hard evidence that placebo effects can lead people with real diseases like asthma into a false sense of security. A 2011 study found that a placebo could make asthma patients feel less short of breath to nearly the same degree as a pharmacological treatment (albuterol), but that in reality nothing about their physiology had changed in response to the placebo intervention and their pulmonary function tests were still just as bad. This is the sort of false sense of security that can lead to death.
Still, doctors studying “integrative medicine” looking for evidence that ineffective treatments like acupuncture, energy medicine, homeopathy, and the like have real physiological effects can’t resist the placebo narrative because increasingly the evidence that their preferred treatments have no detectable effects above placebo is becoming harder to ignore. So the placebo narrative has evolved, just as CAM evolved into “integrative medicine” as a form of rebranding. In this new narrative, placebo effects aren’t psychological at all, but have a real physiological basis. In other words, they’re real! Of course, most skeptics have never actually said that placebo effects aren’t real, just that they probably aren’t that powerful, are likely mostly artifacts of the clinical trial process, and, most importantly, are not what CAM practitioners claim that they are and not a justification for the use of unscientific, pseudoscientific, and mystical treatments. Indeed, our argument is that, if placebo effects are more than an illusion, one should not use the deception of ineffective pills or treatments to invoke them because (1) deceiving patients is now considered unethical, a throwback to a much more paternalistic form of medicine in which lying to patients “for their own good” was considered acceptable and (2) there’s no reason why what we learn about the role of practitioner-patient interaction and placebo effects can’t be used to make science-based medicine better, more effective, and more satisfying to patients, all without deception.
So it was that I saw a long article by Gary Greenberg in last week’s New York Times Magazine entitled “What if the Placebo Effect Isn’t a Trick?”
The false premise of placebo
Regular readers probably realized right away upon reading the title of the article that it would be like waving the proverbial cape in front of a bull, particularly given the blurb for the article, “New research is zeroing in on a biochemical basis for the placebo effect — possibly opening a Pandora’s box for Western medicine”. Notice how the framing of the article is that somehow “Western medicine” is afraid of the new science being published about placeboes. I can’t help but point out right here how much I can’t stand the term “Western medicine,” which in these narratives always implies a form of medicine that is scientific, reductionist, and non-holistic compared to, of course, “Eastern medicine” or “holistic medicine” and “integrative medicine,” which are portrayed as being just the opposite. It’s an arguably racist construct that portrays “Western medicine” as scientific and “Eastern medicine” as inscrutable, mysterious, and more attuned to emotion, belief, and the whole person. Sure, there can be cultural differences in medical beliefs and how science is applied, but over the last century science has become essentially universal. Medical scientists in Asia would likely beg to differ that their science is somehow inferior to “Western” medical science.
That’s just one of my pet peeves, though, and Gary Greenberg probably had nothing to do with the title and blurb for his article. (New York Times Magazine editor, I’m talking to you.) Unfortunately, Greenberg can’t resist a very conventional reading of the placebo narrative to start his article, opening with a dig at Mayor Henri Lenferink of the Dutch city of Leiden, where the 300 members of the Society for Interdisciplinary Placebo Studies met, who called placebo medicine “fake medicine.” (He was probably closer to the truth than most of the members of the Society.) Now here’s the dig:
Lenferink might not have been so glib had he attended the previous day’s meeting on the other side of town, at which two dozen of the leading lights of placebo science spent a preconference day agonizing over their reputation — as purveyors of sham medicine who prey on the desperate and, if they are lucky, fool people into feeling better — and strategizing about how to improve it. It’s an urgent subject for them, and only in part because, like all apostate professionals, they crave mainstream acceptance. More important, they are motivated by a conviction that the placebo is a powerful medical treatment that is ignored by doctors only at their patients’ expense.
Greenberg apparently doesn’t realize that these “leading lights of placebo science” have no one to blame but themselves for their reputation, given how, as we’ve documented time and time again on this blog, they have an annoying tendency to exaggerate the “power” of placebo effects. Also note the narrative of how they are “apostate professionals.” That term implies that science is a religion and that the only reason these scientists are viewed so dimly is that they have rejected the dominant religion, rather than that they have not yet produced evidence sufficiently compelling to persuade mainstream science that they’re on to something. I really, really, really detest when journalists resort to such lazy tropes about science. Science is not a religion, as tempting as it might be to portray it otherwise.
And after a quarter-century of hard work, they have abundant evidence to prove it. Give people a sugar pill, they have shown, and those patients — especially if they have one of the chronic, stress-related conditions that register the strongest placebo effects and if the treatment is delivered by someone in whom they have confidence — will improve. Tell someone a normal milkshake is a diet beverage, and his gut will respond as if the drink were low fat. Take athletes to the top of the Alps, put them on exercise machines and hook them to an oxygen tank, and they will perform better than when they are breathing room air — even if room air is all that’s in the tank. Wake a patient from surgery and tell him you’ve done an arthroscopic repair, and his knee gets better even if all you did was knock him out and put a couple of incisions in his skin. Give a drug a fancy name, and it works better than if you don’t.
Not exactly. The first example should read: Give people a sugar pill and those patients are more likely to report that they have improved, whether they actually have improved from a physiologic perspective or not.
I will admit that some of these examples I hadn’t heard of before. (No one, not even I, can keep up with all the placebo literature all the time, particularly when I have to keep up with the surgical literature in my specialty and the research literature covering my area of research interest.) I was also rather annoyed that no links to the actual studies were included in the online version of the article. (Bad NYTM! Bad! Bad!) Seriously, this is 2018. When are newspapers going to reliably include the damned link to any study referenced in a story?
Another of my pet peeves aside, I hadn’t seen the milkshake study before. Reading the actual study, I was less impressed. In brief, on two separate occasions, participants consumed a 380-calorie milkshake under the pretense that it was either a 620-calorie “indulgent” shake or a 140-calorie “sensible” shake. Ghrelin, a gut peptide mediating the sensation of hunger, was measured via intravenous blood samples at three time points: baseline, anticipatory, and postconsumption. During the first interval participants were asked to view and rate the (misleading) label of the shake. During the second interval (between 60 and 90 minutes later) participants were asked to drink and rate the milkshake. The mindset of indulgence was reported to produce a “dramatically steeper” decline in ghrelin after consuming the shake, whereas the mindset of sensibility produced a relatively flat ghrelin response. Looking at the paper, I saw several problems. First, although the participants didn’t know that both shakes were the same, it’s not clear whether the investigators were blinded at each session. Second, the “dramatically steeper decline” is less dramatic than presented. There are no error bars on the “money graph” to show variability, and the p-value of the repeated measures effect was only 0.04. Third, the authors did some truly annoying data presentation, showing the Y-axis only between 880 and 960 pg/ml, even though the final values differed by only around 20 pg/ml, or around 2.2%. In other words, this was a small study with a small effect that was barely statistically significant. Quite underwhelming, and the authors barely mentioned placebo effects, although in the last paragraph they did liken their findings to them.
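To put some numbers on why that truncated axis bothers me, here’s a quick back-of-the-envelope sketch in Python. The values are the approximate figures cited above, not exact data from the paper, so treat this as illustration only:

```python
# Rough numbers read off the paper's figure (approximate, for illustration).
baseline = 900.0   # approximate ghrelin level, pg/ml
final_diff = 20.0  # approximate final difference between conditions, pg/ml

# Relative to baseline, the effect is tiny.
relative_diff = final_diff / baseline * 100
print(f"Relative difference: {relative_diff:.1f}%")  # ~2.2%

# But plotted on an axis truncated to 880-960 pg/ml, that same
# difference spans a quarter of the visible range.
axis_min, axis_max = 880.0, 960.0
visual_fraction = final_diff / (axis_max - axis_min) * 100
print(f"Fraction of the plotted axis: {visual_fraction:.1f}%")  # 25.0%
```

A ~2% physiological difference rendered as 25% of the visible plot is exactly the kind of presentation choice that makes a “dramatically steeper decline” look more dramatic than it is.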
What about the next study, the one about the athletes and altitude training? It took some more work to find out what Greenberg was talking about, and I’m still not sure that I have the right study (damn you, NYTM, again!), but it appears to be this one testing the “live high, train low” (LHTL) paradigm used for altitude training in athletes. Sixteen endurance cyclists trained for eight weeks at low altitude; then, after a two-week lead-in, the athletes spent 16 hours a day for four weeks in rooms with either normal air or normobaric hypoxia corresponding to an altitude of 3,000 meters. Neither the subjects nor the scientists knew which group was “living high”. On five occasions before, during, and after the four weeks, the subjects underwent a whole series of performance and physiological tests. Basically, there was no difference between the two groups. The authors speculated that it was likely placebo effects in previous studies that suggested that LHTL improved performance, leading some to conclude that altitude training is little more than placebo. Again, I’m not sure I got the right study, but without a reference I can never quite be sure.
As for the others, these are conventional studies that we’ve discussed time and time again about how surgery is particularly prone to placebo effects and how higher expense and fancier names of drugs increase placebo effects. Then, unsurprisingly, Greenberg buys into the “placebo without deception” narrative:
You don’t even have to deceive the patients. You can hand a patient with irritable bowel syndrome a sugar pill, identify it as such and tell her that sugar pills are known to be effective when used as placebos, and she will get better, especially if you take the time to deliver that message with warmth and close attention.
No. Just no. Really, no. As we’ve discussed numerous times, whenever we’ve examined a study that claims to find placebo effects without deception, we have always—always!—been able to identify unacknowledged deception in the experimental design, primarily through the hyping up of “powerful placebo effects.” There is no placebo without deception, at least none that anyone has ever been able to show, no matter how much Ted Kaptchuk, who’s been a frequent topic here on this blog and is the co-star of Greenberg’s article, tries to argue otherwise.
Greenberg’s history of placebo effects is quite…odd…as well. He goes back to Franz Anton Mesmer and attributes the poor reputation of placeboes to the poor reputation of Mesmer and Mesmerism, which Mesmer ascribed to “animal magnetism” that he could use to affect patients. Basically, Mesmer seemed to be able to cure symptoms that no other doctor could, simply by manipulating “animal magnetism”. (I know, this is a simplistic portrayal of the history, but Mesmerism is not the main point of the post.) It turns out that Benjamin Franklin was in Paris at the time Mesmer was at the height of his fame and King Louis XVI was receiving complaints. Louis appointed a commission headed by Franklin to investigate:
To the Franklin commission, the question wasn’t whether Mesmer was a fraud and his patients were dupes. Everyone could be acting in good faith, but belief alone did not prove that the magnetism was at work. To settle this question, they designed a series of trials that ruled out possible causes of the observed effects other than animal magnetism. The most likely confounding variable, they thought, was some faculty of mind that made people behave as they did under Mesmer’s ministrations. To rule this out, the panel settled upon a simple method: a blindfold. Over a period of a few months, they ran a series of experiments that tested whether people experienced the effects of animal magnetism even when they couldn’t see.
It was possibly the first-ever blinded experiment, and it soundly proved what scientists today call the null hypothesis: There was no causal connection between the behavior of the doctor and the response of the patients, which meant, as Franklin’s panel put it in their report, that “this agent, this fluid, has no existence.” That didn’t imply that people were pretending to twitch or cry out, or lying when they said they felt better; only that their behavior wasn’t a result of this nonexistent force. Rather, the panel wrote, “the imagination singly produces all the effects attributed to the magnetism.”
It is, of course, fascinating history, but it’s not the reason why placebo effects are so “disreputable”. Rather, it’s how throughout the last several decades promoters of unscientific medicine have been using placebo effects as a justification for how rank quackery supposedly “works.” Greenberg is, however, correct that the modern appreciation of placebo effects began in earnest in 1955 when Harvard surgeon Henry Beecher lectured about placebo effects at the meeting of the American Medical Association, leading to a new calculus in which drugs had to be tested against placebo controls in order to determine if they actually were efficacious or not.
Which brings us, of course, to Ted Kaptchuk.
“I don’t love science”
I’ll give Greenberg credit for revealing something about Ted Kaptchuk, the Harvard “scientist” who’s become the guru of placebo medicine science. Kimball Atwood wrote extensively about him early in the history of this blog in an epic multipart series (part 1, 2, 2.1, 2.2, 2.3) showing how he is a believer in all manner of quackery, but especially acupuncture. He is also the foremost promoter of the “placebo without deception” narrative. In Greenberg’s piece, he comes across as only grudgingly accepting science. For example:
When Ted Kaptchuk was asked to give the opening keynote address at the conference in Leiden, he contemplated committing the gravest heresy imaginable: kicking off the inaugural gathering of the Society for Interdisciplinary Placebo Studies by declaring that there was no such thing as the placebo effect. When he broached this provocation in conversation with me not long before the conference, it became clear that his point harked directly back to Franklin: that the topic he and his colleagues studied was created by the scientific establishment, and only in order to exclude it — which means that they are always playing on hostile terrain. Science is “designed to get rid of the husks and find the kernels,” he told me. Much can be lost in the threshing — in particular, Kaptchuk sometimes worries, the rituals embedded in the doctor-patient encounter that he thinks are fundamental to the placebo effect, and that he believes embody an aspect of medicine that has disappeared as scientists and doctors pursue the course laid by Franklin’s commission. “Medical care is a moral act,” he says, in which a suffering person puts his or her fate in the hands of a trusted healer.
“I don’t love science,” Kaptchuk told me. “I want to know what heals people.” Science may not be the only way to understand illness and healing, but it is the established way. “That’s where the power is,” Kaptchuk says. That instinct is why he left his position as director of a pain clinic in 1990 to join Harvard — and it’s why he was delighted when, in 2010, he was contacted by Kathryn Hall, a molecular biologist. Here was someone with an interest in his topic who was also an expert in molecules, and who might serve as an emissary to help usher the placebo into the medical establishment.
“Science may not be the only way to understand illness and healing”? Maybe so, but it is the only established way to figure out how to diagnose illness and to determine what is and isn’t effective in treating illness. Look at the passage above, and it’s clear that Kaptchuk isn’t interested in science other than to use it the way a drunk uses a lamppost, for support rather than illumination. Just look at how bitter he is that placebo effects were supposedly “created in order to exclude” them, which is not exactly true. When placebo effects were appreciated, it became clear that if you want to know how much of an effect is due to a drug you need to take placebo effects into account. That’s a different thing.
In any event, Kaptchuk decided that he had to find molecular pathways to explain placebo effects if he ever wanted to see them taken seriously. So he teamed up with Kathryn Hall. Yet, later in the article, when Hall had seemingly found an enzyme that might impact placebo effects, he wasn’t as happy as you would expect him to be:
But Kaptchuk also has a deeper unease about Hall’s discovery. The placebo effect can’t be totally reduced to its molecules, he feels certain — and while research like Hall’s will surely enhance its credibility, he also sees a risk in playing his game on scientific turf. “Once you start measuring the placebo effect in a quantitative way,” he says, “you’re transforming it to be something other than what it is. You suck out what was previously there and turn it into science.” Reduced to its molecules, he fears, the placebo effect may become “yet another thing on the conveyor belt of routinized care.”
So what does Kaptchuk want? For placebo effects to be taken more seriously and for scientists to understand them better or not? And what is Hall’s discovery anyway? Enter catechol-O-methyltransferase (COMT).
Catechol-O-methyltransferase (COMT): The placebo enzyme?
Before I describe what I found perusing the scientific literature, I’ll let Greenberg tell the tale of catechol-O-methyltransferase (COMT), the enzyme that Hall believes is a key mediator of placebo effects:
When Hall contacted him, she seemed like a perfect addition to the team he was assembling to do just that. He even had an idea of exactly how she could help. In the course of conducting the study, Kaptchuk had taken DNA samples from subjects in hopes of finding some molecular pattern among the responses. This was an investigation tailor-made to Hall’s expertise, and she agreed to take it on. Of course, the genome is vast, and it was hard to know where to begin — until, she says, she and Kaptchuk attended a talk in which a colleague presented evidence that an enzyme called COMT affected people’s response to pain and painkillers. Levels of that enzyme, Hall already knew, were also correlated with Parkinson’s disease, depression and schizophrenia, and in clinical trials people with those conditions had shown a strong placebo response. When they heard that COMT was also correlated with pain response — another area with significant placebo effects — Hall recalls, “Ted and I looked at each other and were like: ‘That’s it! That’s it!’ ”
Notice something right away. Instead of going into this study without preconceptions, looking at potential genomic determinants of placebo response without bias, Hall and Kaptchuk had identified an enzyme that, conveniently enough, metabolizes norepinephrine, epinephrine (a.k.a. adrenaline), and dopamine. These molecules are already known to be involved with responses to stress, as well as with reward and good feeling (e.g., dopamine). COMT has also been implicated in quite a few diseases and conditions, including hypertension, preeclampsia, cardiovascular disease, psychiatric disorders, cancer, and chronic fatigue syndrome (CFS). But that’s not all. COMT can also modify clinical response to both active drugs in randomized clinical trials.
What they claim to have found is this:
It is not possible to assay levels of COMT directly in a living brain, but there is a snippet of the genome called rs4680 that governs the production of the enzyme, and that varies from one person to another: One variant predicts low levels of COMT, while another predicts high levels. When Hall analyzed the I.B.S. patients’ DNA, she found a distinct trend. Those with the high-COMT variant had the weakest placebo responses, and those with the opposite variant had the strongest. These effects were compounded by the amount of interaction each patient got: For instance, low-COMT, high-interaction patients fared best of all, but the low-COMT subjects who were placed in the no-treatment group did worse than the other genotypes in that group. They were, in other words, more sensitive to the impact of the relationship with the healer.
There are a lot of assumptions in this narrative, which, no doubt, Greenberg got straight from Kaptchuk and Hall. It’s also rather telling how, when a story involves something that’s not well supported by science (some would add the word “yet” here, but I’m not so sure…yet), The NYTM doesn’t insist on telling “both sides.” Greenberg’s entire narrative appears to come primarily from two sources, Kaptchuk and Hall, with no one expressing a skeptical viewpoint.
Be that as it may, let’s take a look at one of the studies by Hall and Kaptchuk that purports to show that COMT activity affects placebo effects. Basically, they took an old study in which they examined components of the placebo effect through a clinical trial in which patients were randomized into three groups: (1) no-treatment control (“waitlist”); (2) placebo acupuncture (“limited”); (3) placebo acupuncture plus a supportive patient-provider (“augmented”). We’ve written about this study before. This was actually an interesting study in that it identified the practitioner-patient relationship as the most important component of placebo effects. However, it had some issues, too. For example, Steve noted that the waitlist group showed considerable improvement on the severity scale as well and that almost 30% of the subjects reported adequate relief of symptoms—even though on average there was no significant change, further noting that this is important for understanding placebo effects because it means doing nothing but entering a study will create the appearance of benefit for about 1/3 of subjects. From my perspective, this is likely an artifact of randomized clinical trials.
But what about the new study? Out of 262 subjects in the original study, 112 gave consent for genetic screening, and a total of 10 more were excluded for various reasons, such as missing data, leaving 102 samples to be examined. Genomic DNA was extracted from whole blood and analyzed for the COMT SNP rs4680. (SNP stands for single-nucleotide polymorphism; SNPs are the most common type of genetic variation among people. Each SNP represents a difference in a single DNA building block, called a nucleotide. For example, a SNP may replace the nucleotide cytosine with the nucleotide thymine in a certain stretch of DNA.) The authors were able to use this SNP to determine the presence of specific alleles in the COMT gene and reported that one allele was linearly related to placebo response as measured by changes in the IBS severity score (p = .035).
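For readers unfamiliar with what an allele-by-allele “linear relationship” of this sort usually means in practice, here’s a toy sketch in Python. The data are entirely made up (nothing below comes from the actual study except the sample size of 102), and the approach shown, fitting a line to improvement as a function of allele dose, is simply one common way such trend analyses are done:

```python
import numpy as np

# Hypothetical illustration (NOT the study's data): code the rs4680
# genotype as the number of "low-COMT" alleles carried (0, 1, or 2)
# and ask whether improvement on a severity scale trends with it.
rng = np.random.default_rng(0)
n = 102  # sample size reported in the post
alleles = rng.integers(0, 3, size=n)                      # allele dose: 0, 1, or 2
improvement = 10.0 * alleles + rng.normal(0, 40, size=n)  # noisy linear trend

# Fit a straight line: slope = estimated change in score per allele.
slope, intercept = np.polyfit(alleles, improvement, 1)
r = np.corrcoef(alleles, improvement)[0, 1]
print(f"slope = {slope:.1f} points per allele, r = {r:.2f}")
```

Note how weak such a trend can look even when it is built into the simulated data; with real, noisy clinical scores and a p-value of .035 on a subsample, the clinical meaningfulness of the correlation is very much an open question.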
Here are the problems with this study. First, the original study didn’t show results that were that robust. Second, the authors didn’t even measure the SNP for all or even most of the subjects. Third, the correlation found in this study, although statistically significant, was not all that impressive, and one has to wonder whether it’s clinically significant. Indeed, I notice a similar pattern with other COMT studies, such as this one, which purports to show that genetic variation in COMT modifies effects of clonidine treatment in chronic fatigue syndrome. As for other such studies, Hall is quoted in Greenberg’s article thusly:
…Hall argues, what’s important isn’t the direction of the effect, but rather that there is an effect, one that varies depending on genotype — and that the same gene variant also seems to determine the relative effectiveness of the drug. This outcome contradicts the logic underlying clinical trials. It suggests that placebo and drug do not involve separate processes, one psychological and the other physical, that add up to the overall effectiveness of the treatment; rather, they may both operate on the same biochemical pathway — the one governed in part by the COMT gene.
This, of course, should not be surprising if true. In fact, I’d find it surprising if molecular pathways governing drug effects didn’t have considerable overlap or crosstalk with molecular pathways governing placebo effects. This is particularly true given that COMT is involved in so many pathways influencing so many diseases.
Which brings us to…the placebome!
The magical mystical placebome
At this point, I can’t help but mention that I get annoyed at how scientists have put the suffix “-ome” after everything. It started with “genome”, which refers to all of an organism’s genetic material. Now, however, we have the metabolome, transcriptome, epigenome, and so many others. As they always do with any medical or scientific concept, alternative medicine practitioners have glommed on to “-omics” to the point of producing what I like to call “woo-omics.”
The problem with all these “omics” is that they are hideously complicated, with interactions of thousands of genes, proteins, and other entities that must be made sense of in order to understand what is going on. Indeed, arguably the reason we never bothered with these sorts of analyses before is that, until the last 10–20 years, they were quite simply impossible. The computing power and algorithms necessary to do them simply didn’t exist and had to be developed. Neither did the technology. Then, beginning in the late 1990s, techniques were developed to measure expression profiles that included every known gene in the human genome. Building on techniques developed for the Human Genome Project and other genomics initiatives, in the early 2000s we had cDNA microarrays, the ability to scan thousands of single nucleotide polymorphisms (SNPs) and look for associations with diseases, and the like. Systems biology started to come into its own, wherein biology was studied not so much at the level of the single gene and protein but by looking at expression and activity data and constructing networks. The result of the new systems biology and “omics” has been a torrential flood of data that’s far ahead of our ability to analyze it fully.
So naturally, Hall and Kaptchuk want in on all this -omics goodness, which led to:
The discovery of this genetic correlation to placebo response set Hall off on a continuing effort to identify the biochemical ensemble she calls the placebome — the term reflecting her belief that it will one day take its place among the other important “-omes” of medical science, from the genome to the microbiome. The rs4680 gene snippet is one of a group that governs the production of COMT, and COMT is one of a number of enzymes that determine levels of catecholamines, a group of brain chemicals that includes dopamine and epinephrine. (Low COMT tends to mean higher levels of dopamine, and vice versa.)
Because of course she wanted to create a new “-ome.” And what is the placebome? This:
Hall has begun to think that the placebome will wind up essentially being a chemical pathway along which healing signals travel — and not only to the mind, as an experience of feeling better, but also to the body. This pathway may be where the brain translates the act of caring into physical healing, turning on the biological processes that relieve pain, reduce inflammation and promote health, especially in chronic and stress-related illnesses — like irritable bowel syndrome and some heart diseases. If the brain employs this same pathway in response to drugs and placebos, then of course it is possible that they might work together, like convoys of drafting trucks, to traverse the territory. But it is also possible that they will encroach on one another, that there will be traffic jams in the pathway.
Yes, maybe. But first it has to be shown that there even is such a thing as the “placebome.” That has yet to be demonstrated convincingly. What has been demonstrated is that COMT alleles can be associated with differences in drug effects and might—at best, might—be associated with more response to placebo. It’s a lot of correlation, and no evidence for causation. Desperately needed is evidence that there is a distinct pathway associated with placebo effects that justifies a new name like “placebome” and that COMT activity is more than just correlated with differences in placebo responsiveness. (Baby steps. You have to walk before you can run.)
The bottom line is that this is new, interesting science. Unfortunately, it is being used for an ideological message in promotion of what can only be called quackery. Even worse, The NYTM didn’t even show an ounce of skepticism and acknowledge that.