26 Pages · Posted: 23 Aug 2011 · Last revised: 17 Jul 2012
Date Written: July 13, 2012
Traditional pre-publication peer review of scientific output is a slow, inefficient, and unreliable process. Efforts to replace or supplement traditional evaluation models with open evaluation platforms that leverage advances in information technology are slowly gaining traction, but remain in the early stages of design and implementation. Here I discuss a number of considerations relevant to the development of such platforms. I focus particular attention on three core elements that next-generation evaluation platforms should strive to emphasize: (a) open and transparent access to accumulated evaluation data, (b) personalized and highly customizable performance metrics, and (c) appropriate short-term incentives for the user base. Because all of these elements have already been successfully implemented at scale in hundreds of existing social web applications, I argue that development of new scientific evaluation platforms should proceed largely by adapting existing techniques rather than by engineering entirely new evaluation mechanisms. Successful implementation of open evaluation platforms has the potential to substantially advance both the pace and the quality of scientific publication and evaluation, and the scientific community has a vested interest in shifting towards such models as soon as possible.
Keywords: peer review, evaluation, publishing, open access
Suggested Citation:
Yarkoni, Tal, Designing Next-Generation Platforms for Evaluating Scientific Output: What Scientists Can Learn from the Social Web (July 13, 2012). Available at SSRN: https://ssrn.com/abstract=1915313 or http://dx.doi.org/10.2139/ssrn.1915313