
About three weeks ago, ironically enough, right around the time of TAM 9, the New England Journal of Medicine (NEJM) inadvertently provided us, in the form of a new study on asthma and placebo effects, not only material for our discussion panel on placebo effects but material for multiple posts, including one by me, one by Kimball Atwood, and one by Peter Lipson, the latter two of whom pointed out that using these results in certain ways could end with patients dying. Meanwhile, Mark Crislip, in his ever-inimitable fashion, discussed the study as well, using it to liken complementary and alternative medicine (CAM) to the “beer goggles of medicine,” a line I totally plan on stealing. The study itself, we all agreed, was actually pretty well done. What it showed is that in asthma a patient’s subjective assessment of how well he’s doing is a poor guide to how well his lungs are actually doing from an objective, functional standpoint. For the most part, the authors came to this conclusion as well, although their hemming and hawing over their results made almost palpable their disappointment that their chosen placebos utterly failed to produce anything resembling an objective improvement in lung function as measured by changes (or lack thereof) in FEV1.

In actuality, where most of our criticism landed, and landed hard—deservedly, in my opinion—was on the accompanying editorial, written by Dr. Daniel Moerman, an emeritus professor of anthropology at the University of Michigan-Dearborn. There was a time when I thought that anthropologists might have a lot to tell us about how we practice medicine, and maybe they actually do. Unfortunately, my opinion in this matter has been considerably soured by much of what I’ve read when anthropologists dabble in medicine. Recently, I became aware that Moerman appeared on the Clinical Conversations podcast around the time his editorial was published. Even though the podcast is less than 18 minutes long, his appearance provides a rich vein of material to mine regarding what, exactly, placebo effects are or are not, not to mention evidence that Dr. Moerman likes to make like Humpty Dumpty in this passage:

‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’

‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’

‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.’

Let’s dig in, shall we?

The interviewer, Joe Elia, begins by framing the question of the significance of the NEJM placebo/asthma study as asking what matters more: patients’ subjective responses or objective measures? Right off the bat, this is a problem for several reasons, the most glaring of which is that it’s a false dichotomy. Both matter, but for different diseases and conditions one can matter more than the other. For example, in the asthma study, as all the SBM bloggers who wrote about it pointed out, objective measures matter a lot. If, for example, a patient with asthma has a very low FEV1, he might still feel OK or have only mild shortness of breath and yet be just a tiny push away from total respiratory collapse. Another example that comes to mind is diabetes, particularly type 1 diabetes. Before we had an effective treatment in the form of injected insulin to restore blood glucose levels to something resembling normal, many diabetics felt more or less fine, aside from symptoms such as thirst and frequent urination. Yet they could easily be just a piece of cake away from diabetic ketoacidosis. In such conditions, objective improvement matters, and it matters a lot—far more than subjective symptoms. That doesn’t mean that subjective symptoms aren’t important, but concentrating on the subjective and dismissing the objective can be dangerous. Moerman, not being a physician, seems not to recognize this and doesn’t even address the issue. Indeed, he seems blithely unaware that relying on placebo responses in diseases that produce real, life-threatening physiological derangements is a good way to kill at least a few patients. But they’ll feel great—until right before they crump.

Elia then asks Moerman what he sees in medical studies such as the NEJM placebo study that is common to other human situations. Moerman responds:

…I see actors and responders. I see uniforms. I see symbols of power. I see authoritarian and all sorts of other kinds of interactions between people. I see lots of interactions between people. I see lots and lots and lots of meaning.

And I see dead people. (Sorry, couldn’t resist.)

Time and time again, Moerman returns to this word, “meaning.” But what does he—if you’ll excuse the awkward sentence construction—mean when he uses the word “meaning”? Elia asks him just that question, pointing out that the word featured prominently in the title of his book Medicine, Meaning and the “Placebo Effect”. Moerman responds with a bit of a waffle dance before he tries to actually answer the question:

…given that we’re talking to a bunch of physicians, let me start by saying why it is I put “placebo effect” in quotation marks. What we mean ordinarily by “placebo effect” is unproblematic. It’s an inert substance designed to mimic a medical procedure. The key thing is that it’s inert. If it’s inert, what that means is, it can’t do anything. That’s what “inert” means. But there simply can’t be such a thing as a placebo effect. It’s a contradiction in terms, sort of like “king of America.” So, I think that “placebo effect” is like “king of America.” It doesn’t exist. Now, at the same time we all know that if you give people inert medications they often respond dramatically, and they get a lot better. So, the only thing that we know for sure is that it’s not the placebo that did it. So what did do it? And what I argue is that what did it is all of the other meaningful stuff that’s associated with medicine, starting with the behavior of the parking lot attendant, going through the receptionist, to what’s hanging on the walls to the art in hospital. I said in the article, our hospital has two helipads.

When you walk into a place like that you know you’re in a place of great overweening power. It’s incredibly meaningful. And I would argue that that meaning, that and all sorts of other kinds of meaning—the stethoscope around the neck, the uniforms, the funny white shoes, you know, on and on and on—all of that stuff goes together to create a generic system of meaning which is then sort of instantiated by the specific red or orange or blue pills that the doctor gives you and tells you when to take it this way and that way and to drink lots of water, which is a healing substance all of its own. And the meaning that’s attached to all of that stuff can be at least as powerful as whatever is in the pill, whether it’s inert or not.

Alright, I’ll give Moerman credit for a bit of a sense of humor. That line about his hospital having two helipads wasn’t half-bad. Of course, back when I was doing my residency in Cleveland, our county hospital had three helipads. So there. (Actually, the reason it had three helipads is that it was the main base for Metro LifeFlight, for which I moonlighted as a flight physician for nearly three years while I was in graduate school.) In any event, Moerman seems to miss a huge point. He seems to be arguing that placebo effects come from the atmosphere of medicine; i.e., the lab coats, the halls of “power,” the helipads, the medical jargon, the mysterious language that only medical personnel (the high priests or shamans of whom are, presumably, the doctors) can understand. Here’s the problem. In the NEJM study, the patients in the no-treatment, “watchful waiting” group experienced all of that medical awesomeness, yet they didn’t feel better. They only felt better after they got either active treatment or placebo treatment. In fact, all that medical awesomeness didn’t seem to affect them very much at all. True, some of those who received no treatment reported feeling better, but that’s not uncommon in a clinical trial, and far fewer of them felt better spontaneously than did those treated with an albuterol inhaler or the placebo treatments. In this study, at least, the aura of medicine didn’t do much compared to the actual placebo intervention. Moerman completely missed the point here.

He does a bit better, although not a lot, in a 2002 article to which he refers in the interview, Deconstructing the Placebo Effect and Finding the Meaning Response. In it, he lists studies in which, for example, medical students reported feeling a stimulant response after taking a red placebo and a sedative response after taking a blue placebo; people with headaches reported more pain relief after taking branded aspirin than aspirin from a plain bottle, and more relief after placebo in the branded bottle than placebo in a plain bottle; and people who were told that exercise would improve their psychological well-being reported—surprise! surprise!—that exercise improved their psychological well-being. In the article, he also tries to have it both ways. While arguing time and time again that placebos, because they are inert, can’t do anything, he takes pains to point out that placebo responses leading to pain relief can be blocked by an opiate antagonist, naloxone, concluding, rather disingenuously in my opinion, “To say that a treatment such as acupuncture ‘isn’t better than placebo’ does not mean that it does nothing.” This is, of course, a massive straw man. If, as Mark Crislip jokes, placebo effects due to CAM are the “beer goggles of medicine,” altering perceived pain and symptoms without actually affecting the underlying physiology, it is not surprising that brain function might—oh, you know—actually change in response to placebo.

In the podcast, Moerman chooses two more recent studies to try to make his point—and misinterprets them both. First, he cites a famous article from 2009 in which patients were randomized to individualized acupuncture, standardized acupuncture, simulated acupuncture (twirling a toothpick against the skin), or usual care, and he makes exactly the same mistake interpreting it that CAM practitioners made in trying to promote the study. In essence, he concluded that, because sham acupuncture (the toothpicks) did as well as “real acupuncture” and both did better than usual care, acupuncture “works.” Wrong, wrong, wrong. Moerman then cites a famous German acupuncture study (the GERAC study, published in 2007) as evidence that acupuncture “works” as a “meaningful” intervention. Wrong, wrong, wrong, wrong, wrong as well. The latter study preselected patients with a long history of back pain whose pain didn’t respond well to standard treatment but who were naive to acupuncture, and in it, too, sham acupuncture did as well as “true” acupuncture. If it doesn’t matter where the needles are placed, or even whether needles are used at all, then what such trials demonstrate is a nonspecific placebo response, not a specific effect of acupuncture. In other words, these studies do not show that “acupuncture works very well for low back pain, much better than standard care” (Moerman’s exact words). In actuality, they showed the exact opposite.

He then mentions a study of depression in which St. John’s wort, sertraline, and placebo all produced similar results, and asks:

What do you conclude from that study? That nothing has any effect against depression because a placebo was involved. That doesn’t follow.

Actually, yes it does. It does indeed follow. Well, it doesn’t follow that nothing has any effect against depression; rather, it follows that in this study apparently neither sertraline nor St. John’s wort had any effect. This, by the way, appears to be the study to which Moerman referred. If it is, then it’s not entirely true that sertraline was no different from placebo; it beat placebo on only one of three measures of depression, but on that measure it did demonstrate “much improvement.” Disappointing, but not “no effect,” and there were a number of potential explanations. The authors note that “Failure of established antidepressants to show such superiority occurs in up to 35% of trials, which illustrates the difficulties plaguing randomized placebo-controlled trials in this population.” They also note that only 36% of the sertraline group had their dose maximized, pointing out that “if any protocol bias existed at all, it would favor hypericum [St. John’s wort], which could be dosed to the maximum of its permissible range, whereas the maximum permitted dose of sertraline was only 50% of its highest recommended amount.” So, in this study, it is reasonable to conclude that neither sertraline nor St. John’s wort “worked” in this population at this time at the doses used, but when the totality of evidence and the shortcomings of this trial are taken into account, sertraline does have an effect.

Another issue that Moerman completely ignores is that placebo responses might very well be largely influenced by artifacts inherent in the structure of clinical trials. These artifacts have been heavily studied and include expectancy effects (people are suggestible), observer effects (people often report improvement just from the process of being observed, also known as the Hawthorne effect), observer bias, training effects from repeated testing, and cheerleader effects from being encouraged. One wonders what Moerman would say about recent research, including an (in)famous NEJM meta-analysis and a recently updated Cochrane review, that strongly suggests that, when all these nonspecific effects and experimental biases are adequately controlled for, the placebo effect largely disappears. I think it’s worth quoting each briefly.

First, the NEJM:

…we found little evidence that placebos in general have powerful clinical effects. Placebos had no significant pooled effect on subjective or objective binary or continuous objective outcomes. We found significant effects of placebo on continuous subjective outcomes and for the treatment of pain but also bias related to larger effects in small trials. The use of placebo outside the aegis of a controlled, properly designed clinical trial cannot be recommended.

Then the Cochrane review:

We did not find that placebo interventions have important clinical effects in general. However, in certain settings placebo interventions can influence patient-reported outcomes, especially pain and nausea, though it is difficult to distinguish patient-reported effects of placebo from biased reporting. The effect on pain varied, even among trials with low risk of bias, from negligible to clinically important. Variations in the effect of placebo were partly explained by variations in how trials were conducted and how patients were informed.

Be that as it may, in a way Moerman (sort of) agrees with Crislip, just not in a way that supports his argument that the “meaning” behind placebos is this wonderful, powerful thing. Crislip makes a strong argument dismissing placebo effects as a myth. Moerman dismisses placebo effects in a different manner, one infused with his background as an anthropologist: he denies placebo effects by renaming them. In a way, they are (again, sort of) arguing the same thing. Crislip argues that placebo effects are an example of mild cognitive therapy in which the pain stays the same but the perception of pain changes. Moerman argues something similar, ascribing changes in pain perception to all the trappings of “power,” interactions with health care providers in medical settings, and the “meaning” that patients find in them. None of this is inconsistent with placebo responses being, in actuality, altered perceptions of symptoms. It’s just that Moerman seems to think that the “meaning” that alters these perceptions is far more powerful than it is. Unfortunately, while Crislip is rooted in hard-nosed “materialistic” science, Moerman seems more rooted in postmodern, relativistic thinking:

Practitioners can benefit clinically by conceptualizing this issue in terms of the meaning response rather than the placebo effect. Placebos are inert. You can’t do anything about them. For human beings, meaning is everything that placebos are not, richly alive and powerful. However, we know little of this power, although all clinicians have experienced it. One reason we are so ignorant is that, by focusing on placebos, we constantly have to address the moral and ethical issues of prescribing inert treatments (73, 74), of lying (75), and the like. It seems possible to evade the entire issue by simply avoiding placebos. One cannot, however, avoid meaning while engaging human beings. Even the most distant objects—the planet Venus, the stars in the constellation Orion—are meaningful to us, as well as to others (76).

One notes that reference #76 is a book by Timothy P. McCleary entitled The Stars We Know: Crow Indian Astronomy and Lifeways. Perusing the information about the book, I see that the author states very early on that its purpose was to “provide insight into a little known aspect of Crow culture—Crow ethnoastronomy. Ethnoastronomy, a fairly recent development in human sciences, attempts to elicit how non-Western peoples’ perceptions of cosmic phenomena are utilized in structuring behaviors, values, and mores.” All of this might be fascinating reading as far as learning about the history and culture of various peoples goes, but it would appear to stretch the bounds of what counts as science, and what it has to do with medicine I’m having a hard time grasping. It must be that reductionistic “Western” scientist in me. Is Moerman trying to say that, because humans find “meaning” (whatever that means) in stars and constellations, placebos work? How would understanding “meaning” improve medicine above and beyond what we currently do to understand the effect of patient-provider interactions on health care delivery? Moerman either can’t or doesn’t specify, nor does he provide concrete examples of how his ideas would improve medicine. Maybe he does so in his book, but given that the article to which he referred is billed as the “abstract” or “synopsis” of his book, somehow I doubt it. Worse, Moerman adds nothing new to the conversation, nor does he provide any testable hypotheses that would allow us to use his concept of “meaning” to improve medical care by maximizing nonspecific effects while we use effective medicines.

The lack of specific examples aside, the problem remains for diseases for which there is a real derangement in physiology, such as asthma, diabetes, and the like. If placebo responses make the patient perceive his symptoms as being less severe, that doesn’t help the underlying pathophysiology or work to prevent the very real, very dangerous complications that can result from that pathophysiology. Again, nowhere in Moerman’s editorial or podcast do I see a recognition of that. What I do see is Moerman trying to make like Humpty Dumpty and make the word “meaning” mean just what he chooses it to mean—neither more nor less, except that, now having read his NEJM editorial and his earlier paper and listened to his podcast interview, I’m still not sure he even knows what it’s supposed to mean.

The bottom line is that we as physicians are indeed called upon to relieve patients’ symptoms, but our obligation goes far beyond that. As physicians, we understand the pathophysiology of disease; we know the consequences of leaving a disease untreated. It is not enough for us to make the patient feel better. If that were the case, then there would be no reason not to give patients sedatives or stimulants for almost everything. Those certainly “make patients feel better”! But there are a lot of conditions where physiology trumps subjective complaints, or at least threatens to. Asthma, the topic of the NEJM placebo study from last month, is, of course, a classic example. A patient can be feeling fine (or at least not too bad) but be perilously close to a respiratory arrest. The same is true of diabetes, where a more or less asymptomatic patient can be on the verge of diabetic ketoacidosis. In these cases, our obligation as physicians is not just to make the patient feel better, but to make the patient better.

Posted by David Gorski