A recent meta-analysis of the most commonly prescribed antidepressant drugs raises some very important questions for science-based medicine. The study, Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration, was conducted by Irving Kirsch and colleagues, who reviewed clinical trials of six antidepressants (fluoxetine, venlafaxine, nefazodone, paroxetine, sertraline, and citalopram). They looked at all studies submitted to the FDA prior to approval, whether published or unpublished. They found:

Drug–placebo differences in antidepressant efficacy increase as a function of baseline severity, but are relatively small even for severely depressed patients. The relationship between initial severity and antidepressant efficacy is attributable to decreased responsiveness to placebo among very severely depressed patients, rather than to increased responsiveness to medication.

The press has largely reported this study as showing that “antidepressants don’t work” but the full story is more complex. This analysis certainly has important implications for how we should view the body of evidence for these antidepressants. It also illuminates the possible role of publication bias in the body of scientific literature – something that has far ranging implications for science-based medicine.

The strengths and weaknesses of this study

The primary strength of this analysis is that it looked at all clinical trials for these antidepressants submitted to the FDA prior to approval – whether or not the studies were published. The authors did this specifically to eliminate the effect of publication bias on the data. By pooling the data together using meta-analysis statistical methods, the study was also able to achieve much more power than any single study (the analysis included 5,133 total subjects). The study design also allowed for comparison of antidepressant efficacy for differing initial severity of depression – an important question for determining who should and should not be treated with antidepressants.
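The statistical sense in which pooling adds power can be made concrete with a minimal sketch of fixed-effect (inverse-variance) meta-analysis. The trial numbers below are invented for illustration and are not the Kirsch data; the point is that the pooled standard error comes out smaller than any single trial's, which is what "more power" means here:

```python
import math

# Hypothetical per-trial effect sizes (standardized mean differences)
# and standard errors -- illustrative numbers, not the actual study data.
trials = [
    (0.30, 0.15),
    (0.20, 0.12),
    (0.35, 0.18),
    (0.25, 0.10),
]

# Fixed-effect (inverse-variance) pooling: each trial is weighted by
# 1/SE^2, so larger, more precise trials count for more.
weights = [1.0 / se**2 for _, se in trials]
pooled_effect = sum(w * d for (d, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect = {pooled_effect:.3f}, pooled SE = {pooled_se:.3f}")

# The pooled SE is smaller than the SE of even the most precise
# individual trial -- pooling narrows the uncertainty.
assert pooled_se < min(se for _, se in trials)
```

Real meta-analyses must also check heterogeneity between trials (random-effects models, etc.), which this toy sketch omits.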

The study has numerous weaknesses, however. Because the study only looked at pre-approval clinical trials it did not account for all available data. Also, once a drug is approved study designs are more variable, as they are no longer specifically designed to meet the criteria for FDA approval, and may be more relevant to clinical practice. The analysis only considered a single measure of depression (the Hamilton Rating Scale for Depression – HAM-D), and it is possible that other measures of depression may have yielded a different result. The study also included data mainly for severe depression, with only one study of moderate depression and no studies of mild depression. Finally, it should be noted that meta-analyses in general are not highly predictive of the outcomes of later large definitive trials. The process of performing a meta-analysis itself has the potential to introduce bias and error.

What does this study mean?

This, of course, is the ultimate question for clinicians and the public – how should the results of this meta-analysis be incorporated into the current practice of medicine? This is a complex question, and can only be answered in the context of all available evidence on the use of antidepressants in the treatment of depression. Not all studies and not all systematic reviews and meta-analyses of the use of antidepressants in depression give us the same results. For example, this recent review found:

Paroxetine yielded consistently and significantly better remission (rate difference [RD]: 10% [95% CI = 6 to 14]), clinical response (RD: 17% [95% CI = 7 to 27]), and symptom reduction (effect size: 0.2 [95% CI = 0.1 to 0.3]) than placebo.

This review looked at multiple outcome measures, and pre- and post-approval trials, but only considered published studies. Therefore it had different strengths and weaknesses than the current review, and further study is needed to determine how best to interpret these conflicting results.
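To make the quoted rate difference concrete, here is a minimal sketch of how a remission rate difference and its Wald 95% confidence interval are computed. The counts below are hypothetical, chosen only to illustrate the arithmetic; they are not the review's actual data:

```python
import math

# Hypothetical counts for illustration only -- not the review's data.
# Remission: 220 of 500 on paroxetine vs 170 of 500 on placebo.
r1, n1 = 220, 500  # drug arm
r2, n2 = 170, 500  # placebo arm

p1, p2 = r1 / n1, r2 / n2
rd = p1 - p2  # rate difference (drug minus placebo)

# Wald standard error for a difference of two independent proportions.
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = rd - 1.96 * se, rd + 1.96 * se

print(f"RD = {rd:.0%}, 95% CI = {lo:.0%} to {hi:.0%}")
```

A CI that excludes zero is what lets the reviewers call the difference "significant"; the width of the interval is driven by the arm sizes.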

At this time it is premature to conclude that modern antidepressant medications do not work. There is sufficient evidence for efficacy to continue to use medication as part of the overall treatment approach to depression. The current consensus is that therapy is also a critical component of the long term treatment of depression, and therefore looking at the use of medications in isolation may not reflect their actual clinical use. Multiple studies have now shown that combination treatment (medications and therapy) is better than either alone. There is also evidence that medication treatment is more successful when multiple agents are tried in order to find the optimal treatment. These so-called “real world” treatments are not well reflected in the pre-approval trials considered in this analysis.

But this analysis does indicate that perhaps clinicians should consider alternatives to medication for patients with mild to moderate depression, and reserve medication for patients with more severe depression or those who do not respond to non-medication modalities alone. It also emphasizes (together with other evidence) the need for combination therapy in many patients.

It is also important to note that depression is only one of many clinical uses for antidepressant medications. Other uses include anxiety, panic disorder, neuropathic pain, and eating disorders. Each indication should be considered on its own merits.

The role of publication bias

This meta-analysis is important for what it tells us about the role of publication bias, a term that refers to the tendency for researchers to be more likely to submit, and editors to publish, papers with positive results than with negative results. Therefore any review of the published literature is not a reflection of all evidence, but of a subset of the evidence biased toward positive outcomes. With regard to clinical trials of antidepressants, the pharmaceutical company sponsors of those studies had to report them to the FDA (and therefore they were accessible for this review) but they tended to publish only the positive studies. This is also not the only review to find such a bias. As my colleague, Kimball Atwood, has previously discussed, Turner et al. looked at trials submitted to the FDA for 12 antidepressants and found:

According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive.

We cannot determine whether the bias observed resulted from a failure to submit manuscripts on the part of authors and sponsors, from decisions by journal editors and reviewers not to publish, or both. Selective reporting of clinical trial results may have adverse consequences for researchers, study participants, health care professionals, and patients.

This effect is also not limited to clinical trials of antidepressants, or even to pharmaceutical trials, but plagues all published medical research. Publication bias distorts the body of evidence upon which science-based medicine is dependent. It is a particular problem for systematic reviews and meta-analyses. It even has the potential for creating the impression that a clinical effect exists where it does not.
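How selective publication inflates an apparent effect can be shown with a toy simulation. This is not the Turner et al. analysis, just an illustration under simple assumptions: many small trials of a drug with a modest true effect, where only "significant positive" results reach print:

```python
import random
import statistics

random.seed(42)

# Toy simulation, not real data: each trial observes the true effect
# plus sampling noise.
TRUE_EFFECT = 0.1   # modest true drug-placebo difference
N_TRIALS = 200
SE = 0.15           # per-trial standard error

observed = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_TRIALS)]

# Crude filter standing in for publication bias: only trials that
# cross the conventional significance threshold get "published".
published = [d for d in observed if d > 1.96 * SE]

print(f"mean of all trials:       {statistics.mean(observed):.3f}")
print(f"mean of published trials: {statistics.mean(published):.3f}")

# The published subset systematically overstates the true effect,
# which is exactly the distortion a literature-only review inherits.
assert statistics.mean(published) > statistics.mean(observed)
```

Note that with a true effect of zero the same filter would still yield a set of "published" positive trials, creating the impression of an effect where none exists.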

One way to compensate for the effect of publication bias is to rely more on large, definitive, high profile clinical trials. Such trials are usually the result of a consensus that emerges from a body of literature, with both proponents and skeptics agreeing on the study design, taking into account the results of previous weaker studies. Such trials tend to be high profile, meaning that the results cannot be hidden away once the study is concluded. Large individual trials are therefore not subject to publication bias.

But we need to develop further strategies to minimize the effect of publication bias on the literature. One strategy would be to require that all trials involving human subjects be registered in a central database. The results of all human trials could therefore be accessible to researchers, whether or not the results were ever published. Such a system already exists for FDA drug trials, but should be expanded to include all human trials. Also, researchers should be encouraged to submit studies for publication, even when they are negative. And journal editors should make efforts to publish negative results, and not favor positive results that are more likely to grab headlines.

Posted by Steven Novella

Founder and currently Executive Editor of Science-Based Medicine Steven Novella, MD is an academic clinical neurologist at the Yale University School of Medicine. He is also the president and co-founder of the New England Skeptical Society, the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella also contributes every Sunday to The Rogues Gallery, the official blog of the SGU.