Frequentist Methods for Statistical Inference
Frequentist Methods for Statistical Inference, in Handbook of Forensic Statistics (David L. Banks, Karen Kafadar, David H. Kaye, and Maria Tackett eds., CRC Press: Boca Raton, Florida)
43 pages. Posted: April 21, 2020. Last revised: April 27, 2020.
Date Written: January 20, 2020
Statistical inference can be described as the process of drawing conclusions about a population or process based on sample data. This chapter outlines the logic of “classical” or “frequentist” methods for such inference. Three commonly used concepts for assessing statistical error are confidence intervals, p-values, and hypothesis tests. The chapter explains the reasoning behind these devices without focusing unduly on the computational steps. It also outlines the logic underlying resampling methods. It identifies common misinterpretations of computed quantities and discusses some of the comparative advantages and disadvantages of using confidence intervals, p-values, classical hypothesis tests, and likelihood ratios for various purposes in forensic science. Along with idealized, simple examples of probabilistic processes, it uses two principal examples from forensic science to illustrate the frequentist reasoning. The first involves an experiment to ascertain the validity and false positive probability of identifications made by latent fingerprint examiners. The second involves measurements of the refractive index of glass fragments.
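The resampling logic sketched in the abstract can be illustrated with a percentile bootstrap: repeatedly resample the observed data with replacement, recompute the statistic of interest, and read a confidence interval off the empirical distribution of those recomputed values. The sketch below applies this to hypothetical refractive-index measurements of glass fragments; the data values, sample size, and 95% level are illustrative assumptions, not figures from the chapter.

```python
import random
import statistics

# Hypothetical refractive-index (RI) measurements for glass fragments.
# These values are illustrative only, not data from the chapter.
measurements = [1.51907, 1.51909, 1.51911, 1.51907, 1.51908,
                1.51909, 1.51910, 1.51906, 1.51908, 1.51909]

def bootstrap_ci(data, n_boot=10_000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the mean.

    Draws n_boot resamples of the same size as the data, with
    replacement, and returns the (alpha/2, 1 - alpha/2) percentiles
    of the resampled means.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    boot_means = sorted(
        statistics.mean(rng.choices(data, k=len(data)))
        for _ in range(n_boot)
    )
    lo = boot_means[int((alpha / 2) * n_boot)]
    hi = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_ci(measurements)
print(f"95% bootstrap CI for mean RI: ({lo:.6f}, {hi:.6f})")
```

A frequentist interval like this is read as a statement about the procedure, not the single interval: in repeated sampling, roughly 95% of intervals constructed this way would cover the true mean refractive index.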
Keywords: statistics, probability, forensic science, inference, likelihood ratios, hypothesis tests, confidence intervals, resampling, bootstrap, permutation tests, sampling
JEL Classification: C12, C13, C15, C18, C19