
Review of "You do not receive enough recognition for your influential science"

Published on Nov 04, 2023
Review of "You do not receive enough recognition for your influential science"
This Pub is a Review of
You do not receive enough recognition for your influential science

Abstract: During career advancement and funding allocation decisions in biomedicine, reviewers have traditionally depended on journal-level measures of scientific influence like the impact factor. Prestigious journals are thought to pursue a reputation of exclusivity by rejecting large quantities of papers, many of which may be meritorious. It is possible that this process could create a system whereby some influential articles are prospectively identified and recognized by journal brands but most influential articles are overlooked. Here, we measure the degree to which journal prestige hierarchies capture or overlook influential science. We quantify the fraction of scientists’ articles that would receive recognition because (a) they are published in journals above a chosen impact factor threshold, or (b) are at least as well-cited as articles appearing in such journals. We find that the number of papers cited at least as well as those appearing in high-impact factor journals vastly exceeds the number of papers published in such venues. At the investigator level, this phenomenon extends across gender, racial, and career stage groupings of scientists. We also find that approximately half of researchers never publish in a venue with an impact factor above 15, which under journal-level evaluation regimes may exclude them from consideration for opportunities. Many of these researchers publish equally influential work, however, raising the possibility that the traditionally chosen journal-level measures that are routinely considered under decision-making norms, policy, or law, may recognize as little as 10-20% of the work that warrants recognition.

As a signatory of Publish Your Reviews, I have committed to publish my peer reviews alongside the preprint version of an article. For more information, see http://publishyourreviews.org.

This paper compares the assessment of researchers based on journal-level citation metrics (impact factor) to researcher assessment based on article-level citation metrics (RCR). The authors conclude that “the majority of researchers would receive more consideration using article-level compared to journal-level metrics”.

Below I offer a number of comments on the paper.

“One sensibility that often resonates with members of the scientific community that if we recognize papers during personnel advancement based in large part on the citation rate of their venue, we should also recognize papers cited equally well even if appearing in less prestigious venues”: The authors are right that arguments like this one are often made. However, it is important to acknowledge that there is disagreement about these arguments. See for instance my work on this topic: https://doi.org/10.12688/f1000research.23418.2.

The authors use the RCR metric. In my view this metric has significant problems: https://www.cwts.nl/blog?article=n-q2u294. I believe it would be better to use straightforward citation counts instead of RCR. Impact factors are also based on straightforward citation counts, so this would result in a more meaningful comparison between journal-level and article-level metrics.

“The fraction of scientists that have a higher number of Citation Elite papers vs. Journal Elite papers is nearly an order of magnitude higher (Figure 2). This suggests a substantial improvement to recognition for a large segment of the biomedical research workforce by including article-level indicators as a way of recognizing research.”: I find this conclusion problematic. The authors define citation elite papers in such a way that there are many more citation elite papers than journal elite papers. It is to be expected that this then results in most researchers having more citation elite papers than journal elite papers. This is not an empirical finding but a consequence of the way the authors define citation elite papers. In the approach taken by the authors, “articles with an RCR higher than the median of those published in journals above the impact factor thresholds were labeled as article-level ‘Citation Elites’”. My suggestion is to take a different approach, namely to choose the RCR threshold in such a way that the number of citation elite papers equals the number of journal elite papers. In my view this is necessary for a meaningful comparison between citation elite papers and journal elite papers.
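To make the suggested alternative concrete: the idea is to set the RCR cutoff at the rank-matched quantile, so that the count of Citation Elite papers equals the count of Journal Elite papers by construction. The following sketch illustrates this with invented data; the function name and all numbers are hypothetical, not taken from the paper under review.

```python
# Hypothetical sketch of the suggested threshold-matching approach:
# choose the RCR cutoff so that the number of "Citation Elite" papers
# equals the number of "Journal Elite" papers. All data are invented.

def matched_rcr_threshold(rcrs, impact_factors, if_threshold):
    """Return the RCR cutoff yielding as many Citation Elite papers as
    there are Journal Elite papers (papers in journals whose impact
    factor exceeds if_threshold)."""
    n_journal_elite = sum(1 for jif in impact_factors if jif > if_threshold)
    # Rank RCRs from highest to lowest; the cutoff is the RCR of the
    # n-th best paper, so exactly n_journal_elite papers reach it
    # (ignoring ties).
    ranked = sorted(rcrs, reverse=True)
    return ranked[n_journal_elite - 1]

# Invented example: 8 papers with per-article RCRs and journal impact factors.
rcrs = [0.4, 1.1, 2.5, 3.0, 5.2, 7.8, 0.9, 4.1]
impact_factors = [3.2, 18.0, 4.5, 25.1, 6.7, 9.9, 2.1, 16.3]

cutoff = matched_rcr_threshold(rcrs, impact_factors, if_threshold=15)
citation_elite = [r for r in rcrs if r >= cutoff]
print(cutoff, len(citation_elite))  # 3 Journal Elite papers, 3 Citation Elite papers
```

Under this construction, any difference that remains between the two groups of papers (or researchers) reflects how the metrics rank papers differently, rather than a difference in how many papers each definition admits.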

“Reexamination under strict zero-sum conditions”: I read this section multiple times, but I am afraid I don’t understand it. The explanation of the zero-sum framework needs improvement and further elaboration.

Competing interest: I am currently working together with one of the authors (Chaoqun Ni) in a joint research project.
