Reining in the Results: The Use of Human Indexing and Computer Algorithms in West’s Headnotes & Key Numbers and LexisNexis’s Headnotes & Topics as Tools for Finding Relevant Case Law
34 Pages · Posted: 15 Sep 2009
Date Written: September 14, 2009
Since the advent of LexisNexis headnotes and the LexisNexis classification system, the author has wondered about the different ways results are generated in West's Custom Digest, in LexisNexis's 'Search by Topic or Headnote,' and by KeyCite and Shepard's. There has been some anecdotal discussion of the differences, but no empirical investigation. This paper begins that investigation: the author took ten pairs of matching headnotes from legally important federal and California cases and reviewed the cases in the result sets generated by each classification and citator system for relevance. The relevance standards for each case are included. The paper first reviews previous full-text database testing and the benefits and detriments of both human indexing and algorithmic indexing. The two very different systems are then tested. Ten pairs of headnotes is too small a sample to establish definitively that results generated by system A are, and always will be, a certain percentage more or less relevant than those of system B. However, the differences in the result sets for the classification systems and for the citator systems do raise interesting questions about the efficiency and comprehensiveness of any one system, and about the need to adjust research strategies accordingly.
Keywords: classification, human indexing, algorithms, search strategy, KeyCite, Shepard's, West's Digest, Lexis Search by Topic or Headnote