Narrative Abduction
125 Pages
Posted: 22 May 2024
Date Written: December 2022
Abstract
This paper is the final report for a 2022 research project, "Bolstering the Effectiveness of HMTs through Support for Narrative Abduction" (HMTs: human-machine teams), conducted by the Hunt Laboratory for Intelligence Research at the University of Melbourne with support from, and in collaboration with, the Laboratory for Analytic Sciences at North Carolina State University.
Executive Summary: Intelligence analysts often develop narratives to help them understand complex situations. A crucial part of this process is assessing the plausibility of these explanatory narratives. “Narrative abduction” is the technical term for the reasoning leading to plausibility assessments. Narrative abduction is difficult, and at least some intelligence failures are, at their heart, failures of narrative abduction.
Despite its pervasiveness and its difficulty, the question of how to do narrative abduction has received little research attention and remains poorly understood. One major question is whether some form of human-machine teaming can improve the quality or efficiency of narrative abduction. To help address this, the current project had two specific objectives: (1) develop a new analytic method to guide analysts through the complex process of narrative abduction, and (2) develop a working “app” to support teams of human analysts in using the method. This might set the stage for automating some aspects of the reasoning process.
We pursued these objectives using a "design science research" (DSR) approach, which focuses on the development of some artefact and proceeds via rapid, iterated design-build-evaluate cycles. The artefact, in our case, was what we called a framework, consisting primarily of an analytic method and a corresponding app, along with other resources such as guidance materials. Our starting point was the observation that in many fields, most notably science, competing theories are evaluated in terms of the extent to which they exhibit a range of virtues such as explanatory power. We conjectured that competing explanatory narratives in an intelligence context should likewise be evaluated in terms of a set of virtues, which we called plausibility dimensions. Project lead Tim van Gelder (TvG) defined a novel “Analysis of Competing Narratives” (ACN) method, which required analysts to rate competing narratives on three distinct plausibility dimensions. Using a commercial no-code/low-code platform, he developed an app to support teams of analysts in applying the method. Insights gathered over two main design-build-evaluate cycles suggested that while the framework had some strengths, the core activity – rating competing narratives on plausibility dimensions – was problematic, and some of us were not optimistic that further cycles would resolve the difficulties.
We pivoted to an alternative approach, whose starting point was the assumption that an effective framework would be one that enhanced the natural expertise of analysts. Building on prior research in the cognitive science of intelligence analysis, TvG developed a second analytic method, the “Evaluation of Explanatory Narratives.” In this second method, the plausibility dimensions from the ACN approach survived as “lenses” through which analysts could probe competing narratives for critical weaknesses. To support teams using the method, we rapidly prototyped another app, again building on a commercial platform. The one evaluation exercise we were able to conduct with the second framework again yielded mixed results, though this is to be expected so early in the DSR process.
Keywords: abduction, narrative, story, explanation, intelligence analysis, reasoning