
Peer-review is one of the pillars of the institutions of science. The idea is simple – have relevant experts review a submission for quality, thoroughness, and errors prior to opening up that submission to the world through publication. This concept, however, is only as good as its execution.

Recently Springer, the publisher of Tumor Biology, retracted 107 papers published in that journal between 2012 and 2016 because of falsified peer-review. This is the largest retraction to date, and clearly represents a systemic problem. This brings the total retractions by Springer for fake reviews to 450. An exploration of what happened here provides some insight into the issue.

How they faked peer-review

The process of faking peer review was not new, and was rather simple. The authors of the submitted papers suggested reviewers, and helpfully included their e-mail addresses. The names of the suggested reviewers were real experts in the field, but the e-mail addresses submitted were fake and led to a confederate. The confederates submitted glowing reviews of the papers and suggested publication.

The scam was uncovered when the editors of the journal were doing a review of their process. They contacted the listed reviewers directly, and found that the reviewers had never reviewed the papers in question and did not own the e-mail addresses that had been submitted.

The fix for this specific scam is simple – when a reviewer is suggested, don’t trust the submitted contact information. Verify the contact information independently, and reach out to the reviewer directly. However, this will only address this one method, not the deeper problem of researchers trying to game the system. Obviously you need to plug holes when they are discovered, but that is not enough. The process needs to be tight enough to foil the next scam, not just the last one.

This does also raise the question of allowing researchers to suggest their own reviewers. There is a range of opinions about this. Some journal editors have stated that they allow submissions to suggest reviewers, and then deliberately avoid those reviewers. Others don’t allow suggestions at all, and some will use both suggested reviewers and reviewers the editors find themselves.

The obvious reason to avoid suggested reviewers is that they are hand-picked to be favorable to the research or the researchers. In the worst case, friends could trade favorable reviews of each other’s research.

In the best case – and the intended purpose of suggesting reviewers – experts in a field will know who the other experts are, perhaps better than the journal editors do. It is meant to be a good-faith helpful suggestion. Also, some argue that a new or disruptive idea in science might be unfairly excluded by the clique of established experts. An open-minded expert might be needed to give the paper a fair shake.

Probably the best compromise is to use both suggested and independent expert reviewers, and then make a judgement about the actual quality and importance of the research taking all opinions into account. No one said being the editor of a major science journal would be easy.

The culture of science

Another important aspect of this spasm of 107 retractions is that all of the researchers were from Chinese institutions, and all were involved in cancer research. This is important because it shows the importance of culture in science.

While I think we need a robust system within the institutions of science to ensure quality, honesty, transparency, and ethical behavior, these values also need to be shared by the scientific culture. I don’t think there is any system that can adequately compensate for a culture of scientists who are inherently dishonest.

A specific scientific culture exists at every level – it is unique to each lab, institution, field of inquiry, and nation. Norms of behavior are passed down from mentor to student, and are maintained by the community. We see this in other institutions as well, such as governments, where corruption breeds corruption. If one research group is getting a leg up by faking peer review, others will feel the pressure to do the same in order to compete, but will also feel as if the behavior is accepted.

It seems unlikely that all these Chinese cancer researchers came up with the same exact scam during the same four year period. Clearly this was an idea that was passed around, and the behavior became accepted, and perhaps even expected.

There is reason to believe that this is also not an isolated phenomenon, but a problem more generally with scientific research in China. For example, reviews have found that while about 60% of acupuncture studies from Europe and other parts of the world are positive, acupuncture studies from China are 99.8 to 100% positive (statistically unlikely even if acupuncture works, and it probably doesn’t). Part of this is cultural, and part is likely due to the recent surge in science funding in China and pressure to become an overnight scientific powerhouse. It takes time to develop mature institutions, and if you throw money at immature institutions you tend to get corruption and waste.
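To see why a near-100% positive rate is so statistically implausible, here is a rough back-of-the-envelope sketch. The study count (300) is a hypothetical number chosen purely for illustration; the 60% positive rate comes from the figure cited above for trials from elsewhere. If Chinese trials were drawn from the same underlying distribution, the chance of at least 99.8% of them coming up positive would be vanishingly small:

```python
from math import comb

n = 300      # hypothetical number of Chinese acupuncture trials (illustrative only)
p = 0.60     # positive rate observed in acupuncture trials from elsewhere
k_min = 299  # at least 99.8% positive means at least 299 of 300

# Binomial tail: probability of k_min or more positive results by chance,
# if the true positive rate were the same 60% seen in other countries
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))
print(f"P(at least {k_min}/{n} positive given p = {p}) = {prob:.3g}")
```

The resulting probability is astronomically small, which is why such a uniform pattern of positive results points to publication or researcher bias rather than chance.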

All of this is not to suggest that scientific fraud is unique to China. It unfortunately is not. Rather, the point is that fraud and corruption tend to occur in pockets where they are allowed to foment and spread.

Fixing the deeper problem

In order to have science-based medicine, we need science, and we need science that has all the features I named above – high quality, honesty, transparency, and ethical behavior. Fraud and poor quality in science are a massive drain on limited resources. In medicine the problem is magnified because practitioners are making day-to-day decisions about how to treat patients based on that science.

Therefore we all have a huge incentive to have robust systems of maintaining high quality and minimizing fraud. Further, we need to foster a culture where fraud is unacceptable, and honesty is expected. This does mean, in my opinion, that the cost of being caught committing scientific fraud has to be extremely high, not only for the individuals but for the involved institutions.

Journal editors have a lot of power, and responsibility, in this regard. It would be defensible, for example, to require a much higher level of scrutiny for submissions from countries or institutions that have a history of prior fraud. China, in my opinion, should be on scientific probation. Submissions from Chinese institutions should be looked upon with greater suspicion, and require greater vetting. Further, systematic reviews should consider the origin of the studies they review, and not assume a level playing field for all research.

For example, the evidence is clear that acupuncture research from China has a near 100% positive publication bias, and likely a massive positive researcher bias. And yet acupuncture papers from China contaminate all systematic reviews of acupuncture, without acknowledgement of this clearly established fact. It is reasonable to consider the reliability of the source of studies when reviewing the evidence. If one lab was the only lab that could produce positive studies, for example, that would certainly be noted and call their results into question.

I often feel the need to point out when I write such articles that I don’t think science is broken. The institutions of science are fairly robust, and true results do tend to persist over time, while false results tend to fade. Rather, this is about making the institutions of science even better, and minimizing waste of resources. Fraud creates nothing but waste and causes harm. Everyone agrees (at least publicly) that it is unacceptable. The real question is about priority. In my opinion it needs to have an extremely high priority.


Author

  • Founder and currently Executive Editor of Science-Based Medicine Steven Novella, MD is an academic clinical neurologist at the Yale University School of Medicine. He is also the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella also has produced two courses with The Great Courses, and published a book on critical thinking – also called The Skeptics’ Guide to the Universe.
