A new study published in PLOS Biology looks at the potential magnitude and effect of publication bias in animal trials. Essentially, the authors conclude that there is a significant file-drawer effect – the failure to publish negative studies – in animal research, and that this impacts the translation of animal research to human clinical trials.

SBM is greatly concerned with the technology of medical science. On one level, the methods of individual studies need to be closely analyzed for rigor and bias. But we also go to great pains to dispel the myth that individual studies can tell us much about the practice of medicine.

Reliable conclusions come from interpreting the literature as a whole, and not just individual studies. Further, the whole of the literature is greater than the sum of individual studies – there are patterns and effects in the literature itself that need to be considered.

One big effect is the file-drawer effect, or publication bias – the tendency to publish positive studies more than negative studies. A study showing that a treatment works or has potential is often seen as doing more for the reputation of a journal and the careers of the scientists than a negative study does. So studies with no measurable effect tend to languish unpublished.

Individual studies looking at an ineffective treatment (if we assume perfect research methodology) should vary around no net effect. If those studies that are positive by chance are more likely to be published than those that are neutral or negative, then any systematic review of the published literature is likely to find a falsely positive effect.
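To make this concrete, here is a minimal simulation in Python. All of the numbers (study count, sample sizes, publication rates) are assumptions for illustration; none of them come from the study discussed here. It simulates many trials of a treatment with zero true effect and then selectively "publishes" the chance-positive results:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n_studies, n_per_arm = 1000, 30   # assumed numbers, purely for illustration

all_effects, published = [], []
for _ in range(n_studies):
    treated = rng.normal(0.0, 1.0, n_per_arm)   # true effect is zero
    control = rng.normal(0.0, 1.0, n_per_arm)
    diff = treated.mean() - control.mean()
    se = math.sqrt(treated.var(ddof=1) / n_per_arm +
                   control.var(ddof=1) / n_per_arm)
    p = math.erfc(abs(diff / se) / math.sqrt(2))  # two-sided z-test p-value
    all_effects.append(diff)
    # The file drawer: significant positive results always get written up;
    # everything else is published only 10% of the time (an assumed rate).
    if (diff > 0 and p < 0.05) or rng.random() < 0.10:
        published.append(diff)

print(f"mean effect, all studies:       {np.mean(all_effects):+.3f}")
print(f"mean effect, published studies: {np.mean(published):+.3f}")
```

The average across all studies hovers near zero, as it should, while the average across the "published" subset is pulled well above zero. A systematic review that only sees the second group will conclude that a worthless treatment works.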

Of course, we do not live in a perfect world, and many studies have imperfect methods and even hidden biases. So in reality there is likely to be a positive bias in the studies themselves, and this positive bias magnifies the positive publication bias.

There are efforts underway to mitigate the problem of publication bias in the clinical literature. For example, clinicaltrials.gov is a registry in which trials involving human subjects are recorded before the trials are completed and the results known. This way reviewers can have access to all the data – not just the data researchers and journal editors deem worthy of publication.

This new study seeks to explore whether publication bias is similarly a problem with animal studies. The issues are similar to those with human trials. There is an ethical question, as sacrificing animals in research is justified by the data we get in return. If those data are hidden and never become part of the published record, then the animals were sacrificed for nothing.

Publication bias can also lead to false conclusions. This in turn can lead, for example, to clinical trials of a drug that seems promising in animal studies. This could expose human subjects to a harmful or simply worthless drug that would not have made it to human trials if all the negative animal data had been published.

The study itself looked at a database of animal models of stroke, examining 525 publications involving 16 different stroke interventions. There are a few different types of statistical analysis that can be done to infer probable publication bias. Basically, without publication bias, published effect sizes should be distributed symmetrically around the true effect, with smaller studies scattering more widely (the classic funnel shape). If only positive or larger effect sizes are being published, the distribution will be skewed.
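As a rough sketch of the idea behind one of those methods, Egger regression, consider the snippet below. The data and the selection rule are invented for illustration and are not the authors' analysis. Egger regression fits a line to the standardized effect versus precision; without small-study bias the intercept should sit near zero:

```python
import numpy as np

def egger_intercept(effects, ses):
    """Egger regression: regress the standardized effect (effect / SE)
    on precision (1 / SE). With no small-study bias the intercept should
    be near zero; a large intercept suggests funnel-plot asymmetry."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses              # standardized effects
    precision = 1.0 / ses
    slope, intercept = np.polyfit(precision, z, 1)
    return intercept

# Hypothetical data: 200 studies of a treatment with a true effect of 0.2,
# then the same studies with weak results filtered out (a crude stand-in
# for selective publication).
rng = np.random.default_rng(1)
ses = rng.uniform(0.1, 0.5, 200)        # assumed standard errors
effects = rng.normal(0.2, ses)
print("intercept, all studies:     ", egger_intercept(effects, ses))
keep = effects > 0.15                   # drop weak and negative results
print("intercept, 'published' only:", egger_intercept(effects[keep], ses[keep]))
```

In the unbiased sample the intercept hovers near zero; once the weak results are filtered out, small imprecise studies survive only when their effects are inflated, and the intercept is pushed away from zero. That asymmetry is the signature these tests look for.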

Any such analysis provides only an estimate. The authors found that:

Egger regression and trim-and-fill analysis suggested that publication bias was highly prevalent (present in the literature for 16 and ten interventions, respectively) in animal studies modelling stroke. Trim-and-fill analysis suggested that publication bias might account for around one-third of the efficacy reported in systematic reviews, with reported efficacy falling from 31.3% to 23.8% after adjustment for publication bias. We estimate that a further 214 experiments (in addition to the 1,359 identified through rigorous systematic review; non publication rate 14%) have been conducted but not reported. It is probable that publication bias has an important impact in other animal disease models, and more broadly in the life sciences.

So there was some disagreement between the methods used, but both suggested a significant publication bias. If their analysis is correct, about one third of the efficacy reported in systematic reviews of animal stroke studies may be due to publication bias rather than a real effect. The authors also speculate that this effect is likely not unique to stroke and may apply to animal research more broadly.
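Trim-and-fill approaches the same asymmetry from the other direction: it estimates how many studies are "missing" from one side of the funnel, imputes mirror images of the most extreme published results, and re-computes the pooled effect. Here is a deliberately simplified sketch of the fill step. The real Duval and Tweedie method also estimates the number of missing studies from the data; in this toy version that number, k, is simply assumed:

```python
import numpy as np

def fill_step(effects, weights, k):
    """One 'fill' step of a trim-and-fill-style adjustment (toy version:
    k, the number of missing studies, is assumed rather than estimated)."""
    effects = np.asarray(effects, dtype=float)
    weights = np.asarray(weights, dtype=float)
    center = np.average(effects, weights=weights)   # fixed-effect pooled mean
    order = np.argsort(effects)[::-1]               # most positive first
    extreme = order[:k]                             # k most extreme studies
    mirrored = 2 * center - effects[extreme]        # reflect across the mean
    filled_effects = np.concatenate([effects, mirrored])
    filled_weights = np.concatenate([weights, weights[extreme]])
    return np.average(filled_effects, weights=filled_weights)

# Hypothetical use: ten published effect sizes, equal weights, and an
# assumed three unpublished studies.
effects = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
weights = [1.0] * 10
print(fill_step(effects, weights, k=3))   # pooled mean falls after filling
```

When the filled estimate drops noticeably below the raw pooled estimate, as reported efficacy did here (from 31.3% to 23.8%), that is evidence the published record is top-heavy with positive results.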

Of course, this is just an individual study, and further analyses using different data sets are needed to confirm these results.

Conclusion

The results of this study are not surprising and are in line with what is known from examining clinical trials. They suggest that methods to minimize publication bias, similar to those used for human trials, are necessary for animal studies as well.

Hopefully, this kind of self-critical analysis will lead to improvements in the technology of medical research. It should also lead to more caution in interpreting not only single studies but systematic reviews as well.

Also, in my opinion, it highlights the need to consider basic science and plausibility in evaluating animal and clinical trials.

Posted by Steven Novella