Detecting Deception: Adversarial Problem Solving in a Low Base-Rate World
Cognitive Science, Vol. 25, No. 3. (June 2001), pp. 355-392, doi:10.1016/s0364-0213(01)00040-4
Posted: 1 Jul 2013
Date Written: June 1, 2000
The work presented here investigates the process by which one group of individuals solves the problem of detecting deceptions created by other agents. A field experiment was conducted in which twenty-four auditors (partners in international public accounting firms) were asked to review four cases describing real companies that, unknown to the auditors, had perpetrated financial frauds. While many of the auditors failed to detect the manipulations in the cases, a small number of auditors were consistently successful. Since the detection of frauds occurs infrequently in the work of a given auditor, we explain success by the application of powerful heuristics gained from experience with deceptions in everyday life. These heuristics implement a variation of Dennett's intentional stance strategy, which is based on interpreting detected inconsistencies in the light of the Deceiver's (i.e., management's) goals and possible actions. We explain failure to detect deception by means of perturbations (bugs) in the domain knowledge of accounting needed to apply these heuristics to the specific context of financial statement fraud. We test our theory by showing that a computational model of fraud detection that employs the proposed heuristics successfully detects frauds in the cases given to the auditors. We then modify the model by introducing perturbations based on the errors made by each of the auditors in the four cases. The resulting models account for 84 of the 96 observations (i.e., 24 auditors × 4 cases) in our data.