The entire scientific enterprise might be viewed as an attempt to systematically weed out bias from our view of reality. At least, this is a critical component of the scientific method. But bias can be subtle, and can creep in at many stages, from the conception of a study to the citation of published research. The subjects of research may be biased; research methods can be biased in how measurements are made or reported, which comparisons are examined, which statistical methods are used, and how data are collected. The publication process may also introduce bias, as can the way in which researchers access, evaluate, and cite published studies.
All of these sources of bias can distort the scientific literature, and therefore the academic process and the conclusions that scientists and practitioners reach, with downstream effects on many aspects of how we run our society. But the key strength of science is that, when done properly, it can be self-reflective and self-corrective. This self-corrective process should apply not only to the findings of science, but to the process and institutions of science as well. This is one of the main missions of SBM: to examine and reflect upon the relationship among how science is practiced, the findings of science, and the practice and regulation of medicine.
In that vein we are always on the lookout for opportunities to shine a light on new forms of bias within the institutions of science, with an eye toward how such biases can be fixed, or at least disclosed. Scientific journals are a major focus of this attention because they are the ultimate gateway for research to enter the scientific literature, the official body of scientific evidence. Biases in how journals decide what to publish can have a profound impact on the scientific literature, and therefore deserve a lot of attention. We have discussed previously, for example, the effects of predatory journals, the tendency not to publish negative studies or exact replications, and the different challenges of open-access vs. traditional journal business models.
Now we can add one more phenomenon to the list of possible journal biases – nepotistic journals.
In a massive review of publications in 5,468 biomedical journals indexed in the National Library of Medicine between 2015 and 2019, Scanff et al. examined the distribution of authors in each journal. They used two measures to quantify this: the Percentage of Papers by the Most Prolific author (PPMP), and the Gini index, a measure of inequality in the distribution of papers among a journal's authors.
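Both measures can be sketched in a few lines. This is only a toy illustration, not the paper's actual pipeline; the example journal, the author labels, and the simplification of counting one author per paper are all assumptions made for demonstration.

```python
from collections import Counter

def authorship_metrics(paper_authors):
    """Compute PPMP and the Gini index for one journal.

    `paper_authors` lists one author per published paper
    (a simplification; the real analysis handles full author lists).
    """
    counts = Counter(paper_authors)
    # PPMP: share of the journal's papers signed by its most prolific author
    ppmp = 100 * max(counts.values()) / len(paper_authors)
    # Gini index over per-author paper counts: 0 = perfectly equal,
    # values near 1 = publications concentrated in very few authors
    x = sorted(counts.values())
    n, total = len(x), sum(x)
    gini = sum((2 * (i + 1) - n - 1) * xi for i, xi in enumerate(x)) / (n * total)
    return ppmp, gini

# Hypothetical journal: one author on 4 of its 10 papers
papers = ["A"] * 4 + ["B", "C", "D", "E", "F", "G"]
ppmp, g = authorship_metrics(papers)
print(round(ppmp, 1), round(g, 2))  # → 40.0 0.26
```

The Gini computation here uses the standard discrete formula over sorted counts; other equivalent formulations exist.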
The authors found that the median PPMP across the journals examined was 2.9%, meaning that in a typical journal about 3% of published papers share the same most prolific author. They also determined the 95th-percentile cut-off for PPMP: 95% of journals had a PPMP below 10.6%, leaving 5% at 10.6% or above. A high PPMP also correlated with a high Gini index, which indicates a highly unequal distribution of authorship. Further, and perhaps very significantly, they found that journals favoring a specific most prolific author also showed reduced times to publication, such as a higher percentage of papers published within three weeks of submission. This might indicate an expedited or even inadequate review process.
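One simple way to operationalize that screen, given PPMP values for a collection of journals, is to flag everything at or above the 95th percentile. This is a sketch of the idea only; the journal names and PPMP values are invented, and the paper's exact cut-off procedure may differ.

```python
import statistics

def flag_high_ppmp(journal_ppmp):
    """Flag journals at or above the 95th-percentile PPMP cut-off.

    `journal_ppmp` maps journal name -> PPMP (in percent).
    """
    values = list(journal_ppmp.values())
    # statistics.quantiles with n=20 returns 19 cut points;
    # the last one is the 95th percentile
    cutoff = statistics.quantiles(values, n=20)[-1]
    flagged = {j: p for j, p in journal_ppmp.items() if p >= cutoff}
    return cutoff, flagged

# Invented data: 100 journals with PPMPs of 1% through 100%
journals = {f"J{i}": float(i) for i in range(1, 101)}
cutoff, flagged = flag_high_ppmp(journals)
print(cutoff, len(flagged))  # → 95.95 5
```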
But the most significant correlation they found was that among the 5% of journals with the highest PPMP, in 60% of cases the most prolific author was a member of the editorial board. This pattern held even when only research papers were considered (excluding letters and editorials, which editors might disproportionately write). These were the journals deemed “nepotistic”.
Whenever researchers crunch numbers in a massive data set, there is the potential for confounding factors (another form of bias). For example, smaller journals with fewer publications are more susceptible to a high PPMP, since even a few publications by the same author can make up a high percentage. Meanwhile, large journals with tens of thousands of papers over this time period would necessarily have a low PPMP, because there are only so many papers a single researcher can author. Similarly, a scientific journal with a very narrow niche might draw on a small community of researchers who have very few journals in which to publish.
But even taking these factors into consideration, there appears to be a subset of nepotistic journals that cater to one or more members of their editorial staff, allowing them to publish a large number of papers with little editorial barrier. This can be a method of gaming the system, affecting academic promotion and the awarding of grants. These editorial authors can also use these same papers to deliberately cite other papers in the same (or a sister) journal, thereby gaming the impact factor as well.
One other concern this paper could not examine is the scientific quality of nepotistic papers. Given the favorable bias and reduced editorial review, there is concern that low-quality science is finding its way into the literature by this route. A smaller follow-up study doing qualitative analysis of papers from high-PPMP journals would be an interesting next step. I would also be interested in how this factor relates to journals already suspected of publishing low-quality science, such as those in the alternative medicine community.
The simplest partial fix to this problem (like many things in science) is transparency. Journals might be required to publish, alongside their impact factor (a measure of how often their papers are cited), their PPMP and Gini index. Because high-volume journals may mask nepotistic practices behind a large number of publications, publishing the raw number of papers by the most prolific author would also help. Further, reporting the number and/or percentage of papers published in the journal by members of its editorial staff is critical. With this transparency, fellow researchers and academics on promotion committees could easily detect blatant nepotistic practices, reducing the benefit of engaging in them.
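The disclosures proposed above amount to a handful of easily computed numbers per journal. As a rough sketch (the field names, the one-author-per-paper attribution, and the editorial-board list are all hypothetical):

```python
from collections import Counter

def transparency_report(papers, editorial_board):
    """Assemble the suggested disclosure metrics for one journal.

    `papers` is a list of (title, author) tuples; `editorial_board`
    is a set of author names on the journal's editorial staff.
    """
    counts = Counter(author for _, author in papers)
    top_author, top_n = counts.most_common(1)[0]
    n_papers = len(papers)
    board_n = sum(1 for _, author in papers if author in editorial_board)
    return {
        "most_prolific_author": top_author,
        "papers_by_most_prolific": top_n,  # raw count, not just a percentage
        "ppmp_percent": round(100 * top_n / n_papers, 1),
        "editorial_board_papers": board_n,
        "editorial_board_percent": round(100 * board_n / n_papers, 1),
    }

# Hypothetical journal where editor "A" wrote 2 of 4 papers
papers = [("p1", "A"), ("p2", "A"), ("p3", "B"), ("p4", "C")]
print(transparency_report(papers, {"A"}))
```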