Measuring Science: An Exploration
34 Pages · Posted: 6 May 1998 · Last revised: 21 Jun 2021
Date Written: March 1996
This paper examines available U.S. data on academic R&D expenditures, together with the number of papers published and the number of citations to these papers, as possible measures of the 'output' of this enterprise. We look at these numbers for science and engineering as a whole, for five selected major fields, and at the individual university-field level. The published data in Science and Engineering Indicators imply sharply diminishing returns to academic R&D when published papers are used as an 'output' measure. These data are problematic. Using a newer set of data on papers and citations, based on an 'expanding' set, changes the picture drastically, eliminating the seemingly diminishing returns but raising the question of why input prices of academic R&D are rising so much faster than either the GDP deflator or the implicit R&D deflator in industry. A production function analysis of such data indicates significant diminishing returns to 'own' R&D, with the R&D coefficients hovering around 0.5 for estimates with paper numbers as the dependent variable and around 0.6 when total citations are used. When we substitute scientists and engineers for R&D as the right-hand-side variables, the coefficient on papers rises from 0.5 to 0.8, and the coefficient on citations rises from 0.6 to 0.9, indicating systematic measurement problems with R&D as the sole input into the production of scientific output. Allowing for individual university-field effects, however, drives these numbers down below unity. Since in the aggregate both paper numbers and citations are growing as fast as or faster than R&D, this can be seen as leaving a major, yet unmeasured, role for the contribution of spillovers from other fields, other universities, and other countries.
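The production function analysis described in the abstract is, in essence, a log-log (Cobb-Douglas) regression of scientific output on R&D input, where the slope coefficient is the output elasticity. A minimal sketch of that estimation on synthetic data (the variable names and the data are illustrative, not from the paper's actual dataset; the true elasticity is set to 0.5 to mirror the ballpark estimate in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic university-field observations: log R&D spending and log paper
# counts, generated with a true output elasticity of 0.5 plus noise.
log_rd = rng.uniform(0.0, 5.0, size=200)
log_papers = 1.0 + 0.5 * log_rd + rng.normal(0.0, 0.1, size=200)

# OLS of log(papers) on log(R&D): the slope estimates the output elasticity.
X = np.column_stack([np.ones_like(log_rd), log_rd])
coef, *_ = np.linalg.lstsq(X, log_papers, rcond=None)
intercept, elasticity = coef

# An estimated elasticity below 1 implies diminishing returns to 'own' R&D.
print(round(elasticity, 2))
```

In this framing, swapping the right-hand-side variable (R&D dollars versus counts of scientists and engineers) or adding university-field fixed effects changes the estimated elasticity, which is exactly the sensitivity the abstract reports.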