
Pictured: Test subjects probably not worth a press release. (Image from Wikimedia Commons.)

A recent study addresses the problem of sensationalism in the communication of science news, an issue we deal with on a regular basis. The study was titled “The association between exaggeration in health related science news and academic press releases: retrospective observational study”. The results show two interesting things: university press releases frequently overhype the results of studies, and this overhyping has a dramatic effect on subsequent news reporting about the research.

The authors reviewed “Press releases (n=462) on biomedical and health related science issued by 20 leading UK universities in 2011, alongside their associated peer reviewed research papers and news stories (n=668).” They found that 40% of the press releases contained exaggerated health advice, 33% overemphasized the causal connection, and 36% exaggerated the ability to extrapolate animal and cell data to humans.

Further:

When press releases contained such exaggeration, 58%, 81%, and 86% of news stories, respectively, contained similar exaggeration, compared with exaggeration rates of 17%, 18%, and 10% in news when the press releases were not exaggerated.

This study points a finger directly at academic press offices as a significant source of bad science news reporting. This does not let other links in the news chain off the hook, however.

The problem is worsened by changes over the last decade in the news infrastructure. The internet and changing business models have made it difficult for large news outlets to maintain specialist journalists and editors. Therefore, science and health news is increasingly reported by generalist reporters rather than being filtered through a dedicated science editor.

As a result, some outlets simply reprint science press releases without doing any independent investigation, without talking to experts who did not author the study, and without the ability to put the new research into proper context. This is evident simply by searching for a science news item and finding dozens of websites publishing the exact same copy, word for word, which ultimately traces back to the press release.

Bad science news reporting that I have covered over the years has demonstrated problems at every level, sometimes even starting with the scientists themselves. In some cases the conclusion or discussion of a published study will contain speculation that goes well beyond the data. The discussion is a legitimate place to explore the implications of a study, but authors need to be clear about what conclusions flow from the study and what questions are prompted by the study but not addressed by its data.

Authors also should avoid the temptation to draw causal conclusions from correlations and associations. In fact, they need to do more than avoid making spurious conclusions themselves. The discussion is a great place to anticipate and discuss what the study cannot be used to conclude. It is expected that authors will discuss the limitations of a study, but often they can be more explicit and anticipate common abuses.

The BMJ study above discusses two very common ways in which studies are overhyped. The first is drawing causal conclusions from correlations. Correlations alone do not prove causation, but a pattern of correlations can suggest some causal relationships over others. This can often be a complex and nuanced evaluation, and study authors should do their best to put correlations into proper context.

Another potential source of hype is extrapolating from one type of data to conclusions the data does not directly address. In healthcare reporting, the most common form of this problem is extrapolating from preclinical data to clinical conclusions. Preclinical data includes in-vitro studies of cell cultures and animal studies. Such evidence should be used only to determine whether human clinical trials are likely to be safe and worthwhile.

Also, there are different types of clinical studies, and they are often misrepresented. For example, pragmatic studies are used to compare treatments that have already been established as effective, in real-world clinical settings. They are not designed to test efficacy, because they are typically not blinded placebo comparisons. Yet it is not uncommon for unblinded pragmatic studies to be used to make efficacy claims.

The biggest problem, however, is reporting preliminary or exploratory data as if it is confirmatory. I have argued previously that preliminary data should not even be reported to the press, in most cases. Such data simply does not rise to the level of newsworthiness, as it is far more likely to be wrong than to pan out in later research. If preliminary data is communicated, it should come with a bold disclaimer about the preliminary nature of the study.

Sometimes the scientists themselves are innocent, but their university press office tries to find an angle that will promote the study by making it seem relevant to some headline-grabbing issue. Their job is to get press attention, and dry preclinical studies are unlikely to do the job. Therefore they turn to a number of common ploys to make any research seem relevant to a topic that is likely to grab media attention.

In the health arena, this means any study involving viruses might cure the common cold, any study involving cell metabolism might lead to a cure for cancer or obesity, and any study of brain cells might cure Alzheimer’s disease.

Sometimes a tiny footnote in the discussion is latched onto and presented as if it were the main focus of the study, even when the study actually has nothing to do with the topic.

Scientists are usually given the opportunity to review the university press release before it is made public. This means that scientists have an opportunity to be more active in making sure that their research is presented properly.

Of course, journalists and news outlets have responsibility for the news they report, and should not simply be reprinting press releases word-for-word. Science journalists need to at least read the original study and see if the results match what is being sold in the press release. Ideally they will also talk to independent experts to put the study into context.

Conclusion

The BMJ study is not surprising and fits with my experience reporting science and health news over the last decade. I was a bit surprised, however, at how much of an effect the press releases were having on later reporting. This is actually good news, because it means that if scientists and university press offices were more responsible, a significant amount of bad science news reporting could be avoided.

Scientists and press offices should follow some basic rules for good science news reporting:

  • Put a study into an overall scientific context
  • Do not emphasize minor aspects of the research simply because they are more exciting
  • Do not search for any health implication of preclinical data just for the headline, and certainly don’t make it seem like the focus of the study
  • Anticipate and explicitly address common misinterpretations of the research
  • Consider not reporting preliminary data at all, or at least clearly label preliminary data as such early in the press release and make it absolutely clear what this means
  • Do not report correlations as if they prove a specific causation
  • Be up front about all the limitations of a study and alternate interpretations
  • Make it clear when a study is an outlier, if there are multiple schools of thought, or where the study lies in relation to the current preponderance of expert opinion; in other words, do not present one small study as if it overturns a well-established consensus

In short, a good press release not only summarizes the results and conclusions of a study, but educates anyone reading it (journalists and the public) about the kind of research being done and how it fits into the overall scientific enterprise.

In my opinion, this falls into the broader mission of universities, which is education. They should not only be concerned about the education of their students, but also the public at large. A great deal of the education of the public about scientific matters (or other academic matters) is handled out of the university press office. Greater attention needs to be paid to their role in communicating science.




Posted by Steven Novella

Founder and currently Executive Editor of Science-Based Medicine, Steven Novella, MD, is an academic clinical neurologist at the Yale University School of Medicine. He is also the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella has also produced two courses with The Great Courses, and published a book on critical thinking, also called The Skeptics’ Guide to the Universe.