The Problematic Value of Mathematical Models of Evidence
34 Pages. Posted: 16 May 2006. Last revised: 26 Feb 2015.
Legal scholarship exploring the nature of evidence and the process of juridical proof has had a complex relationship with formal modeling. As in so many fields of knowledge, algorithmic approaches to evidence have the theoretical potential to increase the accuracy of fact finding, a tremendously important goal of the legal system. The hope that knowledge could be formalized within the evidentiary realm generated a spate of articles attempting to put probability theory to this purpose. This literature was both insightful and frustrating. Much light was shed on the legal system, but it also quickly became evident that the tools of probability theory were in many ways ill-suited to the task. Fundamental incompatibilities between the structure of legal decision making and the extant formal tools were identified, and it became evident that many of the purported explanations of legal phenomena were internally inconsistent. As a consequence, interest in this type of formal modeling declined, and attention was directed toward different kinds of explanations of the phenomena.
Perhaps under the influence of a recent trend toward various types of formal modeling in legal scholarship, a new burst of articles, rather than attempting to explain the macro structure of trials, which was the previous object of interest, attempts to quantify the probative value of particular items of evidence in ways consistent with the formal features of probability theory, and then to study decision making from that perspective. For example, the value of evidence is often purported to be its likelihood ratio, that is, the probability of discovering or receiving the evidence given a hypothesis (e.g., the defendant did it) divided by the probability of discovering or receiving the evidence given the negation of the hypothesis (the defendant didn't do it). Alternatively, the value of evidence is purported (more contextually) to be the information gain it provides, defined as the increase in probability it provides for a hypothesis above the probability of the hypothesis based on the other available evidence. Both conceptions then assume that all of the various probability assessments conform, or ought to conform, to the dictates of Bayes' theorem (which maintains consistency among such assessments); empirical studies are then done testing the extent to which this is so and proposing how the law can increase the probability that it is so.
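The two measures described above can be sketched computationally. The following is a minimal illustration, not drawn from the article itself; the numerical values (a hypothetical item of evidence found with probability 0.8 if the hypothesis is true, 0.1 if it is false, and a prior of 0.3) are invented solely to show how the likelihood ratio, Bayes' theorem, and information gain relate.

```python
def likelihood_ratio(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """LR = P(E | H) / P(E | not-H): the first measure of probative value."""
    return p_e_given_h / p_e_given_not_h

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' theorem:
    P(H | E) = P(E|H)·P(H) / [P(E|H)·P(H) + P(E|not-H)·P(not-H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

def information_gain(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """The second measure: the increase in the probability of H that the
    evidence provides over the probability based on the other evidence."""
    return posterior(prior, p_e_given_h, p_e_given_not_h) - prior

# Hypothetical numbers for illustration only.
lr = likelihood_ratio(0.8, 0.1)           # 8.0
gain = information_gain(0.3, 0.8, 0.1)    # posterior ≈ 0.774, gain ≈ 0.474
```

On these assumed numbers, the evidence is eight times more probable if the hypothesis is true than if it is false, and a Bayesian fact finder starting at 0.3 would move to roughly 0.774, an information gain of about 0.474.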
The general criticisms of using Bayes' theorem as a formal model of juridical proof are well known and were integral to the last wave of interest in formal modeling of the evidentiary process. This article thus for the most part puts aside that more general issue and focuses specifically on mathematical modeling of the value of particular items of evidence. It demonstrates that formal modeling has only limited power to explain the value of legal evidence, much more limited than those constructing and discussing the models assume, and thus that the conclusions they draw about the value of evidence are unwarranted. This is done through a discussion of four recent examples that attempt to quantify evidence relating to, respectively, carpet fibers, infidelity, DNA random-match evidence, and character evidence used to impeach a witness. This article thus makes two contributions. First, and most importantly, it is another demonstration of the complex relationship between algorithmic tools and legal decision making. Second, it points out serious pitfalls for analytical or empirical studies of juridical proof.
Keywords: Evidence, Statistical Evidence, Probative Value