
In the science communication world, perhaps especially in the subset that self-identifies as “scientific skepticism”, there is a great deal of criticism of bad science reporting. The media gets much of this criticism, and much of it is deserved. But various studies over the last decade or so have shown that journalists, while all too eager to participate, are often not the source of misreported science news. Much of it can be traced back to the press release, and even to the study authors themselves.

A new study adds to this body of research, looking specifically at science article titles and their effect on journalistic coverage. The authors reviewed scientific studies about Alzheimer’s disease (AD). Much of AD research is conducted in mouse models, because there is no non-human animal that gets AD. While some primates and even dogs can develop plaques and other aspects of the disease, none develop the full clinical disease complete with all the pathological findings and dementia. A lot of the basic science research, therefore, depends on animal models, which reflect some marker or aspect of the disease that may tell us something about the biology of AD itself.

This, of course, is a critical distinction. If a new intervention improves a mouse model of AD, that is obviously a much less significant scientific finding than if that same intervention improved AD in actual people. Such a finding can still be useful, but there are many steps between an animal model and the disease in humans.

What the authors of the new study looked at, therefore, was AD research on mouse models and whether the fact that they involved mice was mentioned in the title of the paper itself. They report:

To this end, we analyzed a sample of 623 open-access scientific papers indexed in PubMed in 2018 and 2019 that used mice either as models or as the biological source for experimental studies in AD research. We found a significant association (p < 0.01) between articles’ titles and news stories’ headlines, revealing that when authors omit the species in the paper’s title, writers of news stories tend to follow suit.

Specifically, if the article title declared that the study involved mice, then 46.2% of news headlines did also. If the title did not mention mice, then only 10.4% did. This is a very large difference that traces directly back to the published science article itself. It is also an extremely easy fix – journals should require that animal research declare the subject of the research in the title. But notice that these results do not let journalists off the hook – even when the article title declared the study was in mice, fewer than half of the news headlines disclosed this. Of course, journalists often don’t write their own headlines, so much of the blame falls on the headline writers, but this is all part of the news reporting and the outlet has ultimate responsibility.

The study also found no significant difference between the two groups (titles declaring that the study was in mice or not) in how many papers resulted in at least one news story. They did, however, differ in the number of news stories generated, averaging 3.9 for papers that did not declare mice in the title and 3.0 for those that did. So the distorted news was more “newsworthy”.

While this is only one factor affecting the accuracy of science news reporting, it highlights that scientists themselves (and journal editors) bear significant responsibility for the quality of reporting about their own research. This goes beyond crafting the title of the article. Prior research has also found that when researchers exaggerate their own findings in the abstract or discussion of their paper, this is likely to translate into exaggerated reporting.

In one study, for example, exaggerated claims were most often traced to the press release at the scientist’s institution.

40% (95% confidence interval 33% to 46%) of the press releases contained exaggerated advice, 33% (26% to 40%) contained exaggerated causal claims, and 36% (28% to 46%) contained exaggerated inference to humans from animal research. When press releases contained such exaggeration, 58% (95% confidence interval 48% to 68%), 81% (70% to 93%), and 86% (77% to 95%) of news stories, respectively, contained similar exaggeration, compared with exaggeration rates of 17% (10% to 24%), 18% (9% to 27%), and 10% (0% to 19%) in news when the press releases were not exaggerated.

To cut through those numbers a bit, exaggeration of advice, causal claims, and inference to humans appeared in 58%, 81%, and 86% of news stories respectively when it was present in the press release, and in 17%, 18%, and 10% when it was not. That’s a huge difference. Press releases, in turn, often derive the exaggerated claims from the study itself. Part of the problem here is the difference between the results section of a paper and the discussion. The discussion is where the authors are free to speculate about the implications of their research, in addition to strengths and weaknesses and directions for further research. At times the press office, and later the journalism, focuses on the speculation rather than the actual results of the research.

Perhaps the most dramatic example of this is a paper published by the American Chemical Society about the chirality of proteins recovered from meteorites. Sounds like a dry paper – which might explain why, in the discussion section, the authors decided to speculate that such proteins might hint at the possibility of “advanced intelligent dinosaurs on other planets.” Of course the press release picked up on this, asking, “Could ‘advanced’ dinosaurs rule other planets?”

When I discussed this problem with the press offices of several institutions, they said they have a hard time getting a response from scientists – the scientists take too long, the office misses the news cycle, and so they pretty much have to go it alone. Clearly there is a systemic problem here.

And again, journalists take their share of the blame as well. They should never rely on the press release alone, but should go to the primary source and speak to someone who knows what they are talking about. They rarely do, leading to “science by press release”.

Social media has made this problem both better and worse. It has accelerated the spread of sensational, distorted, and exaggerated science news, and the collapse of traditional news business models has gutted the science journalism infrastructure, but it has also allowed actual scientists to rapidly correct bad science reporting. There is more information and misinformation out there, and the consumer has to find a way to sort through it all.

The entire situation can be vastly improved, however, if scientists themselves take more responsibility for the accurate reporting of their own research.

Posted by Steven Novella