The futility of a well-ordered publication ranking
23 Pages · Posted: 26 Jun 2024
Abstract
This paper investigates the impact that journal-based metrics can have on the research assessment of individual scholars. Based on data from 16,743 authors of 323,913 publications in the broad field of business and management, we observe that journal-based performance assessments of scholars may differ significantly between prominent journal rating lists. By conducting pairwise comparisons between these journal rating lists, we find that in each comparison there are scholars who are perceived to be top performers according to one journal rating list but low performers according to another. We identify the subject areas of researchers experiencing such discrepancies. Moreover, we observe a national self-assessment bias by comparing the research performance of scholars under their "own" national journal rating list with their performance under those of other countries, as well as an individual self-assessment bias that would arise if scholars were allowed to self-select metric criteria among a variety of reasonable choices. We find that many researchers could tune parameters in such a way that they would be perceived to outperform others. Based on our empirical insights, we discuss several implications for researchers, research organisations, and funding bodies.
Keywords: Incentives in science, Journal rankings, Journal quality lists, Performance metrics