Bias Against Novelty in Science: A Cautionary Tale for Users of Bibliometric Indicators
51 Pages · Posted: 25 Apr 2016 · Last revised: 8 Feb 2023
Date Written: April 2016
Abstract
Research that explores uncharted waters has high potential for major impact but also carries greater uncertainty about whether that impact will materialize. Such explorative research is often described as taking a novel approach. This study examines the complex relationship between pursuing a novel approach and impact. Viewing scientific research as a combinatorial process, we measure novelty in science by examining whether a published paper makes first-ever combinations of referenced journals, taking into account the difficulty of making such combinations. We apply this newly developed measure of novelty to all Web of Science research articles published in 2001 across all scientific disciplines. We find that highly novel papers, defined as those making more (and more distant) new combinations, deliver high gains to science: they are more likely to be top 1% highly cited papers in the long run, to inspire follow-on highly cited research, and to be cited in a broader set of disciplines. At the same time, novel research is also riskier, reflected in a higher variance in its citation performance. In addition, we find that novel research is significantly more highly cited in “foreign” fields but not in its “home” field. We also find strong evidence of delayed recognition: novel papers are less likely to be top cited when a short citation time window is used. Finally, novel papers are typically published in journals with a lower than expected Impact Factor. These findings suggest that science policy, in particular funding decisions that rely on traditional bibliometric indicators based on short-term direct citation counts and Journal Impact Factors, may be biased against “high risk/high gain” novel research. The findings also caution against a mono-disciplinary approach in peer review when assessing the true value of novel research.
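To make the core idea of the measure concrete, the following is a minimal Python sketch (not the authors' code) of the first-ever-combination test: for each paper, form all pairs of journals in its reference list and flag pairs never seen in any earlier paper. The paper's actual measure additionally weights new pairs by the difficulty (distance) of the combination, which is omitted here; all names and the toy corpus are illustrative.

from itertools import combinations

def novel_pairs(paper_refs, seen_pairs):
    """Return the referenced-journal pairs in this paper that have
    never co-occurred in any earlier paper.

    paper_refs: iterable of journal names cited by the paper
    seen_pairs: set of frozensets of journal names from prior papers
    """
    pairs = {frozenset(p) for p in combinations(set(paper_refs), 2)}
    return pairs - seen_pairs

# Toy usage: the third paper pairs Journal A with Journal C for the
# first time, so it registers one novel combination.
seen = set()
toy_corpus = [
    ["Journal A", "Journal B"],
    ["Journal B", "Journal C"],
    ["Journal A", "Journal B", "Journal C"],
]
for refs in toy_corpus:
    print(len(novel_pairs(refs, seen)), "new combination(s)")
    seen |= {frozenset(p) for p in combinations(set(refs), 2)}

A full implementation along the paper's lines would run this over all Web of Science articles up to 2001 and then score each new pair by how improbable the combination is, so that distant new combinations count for more.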