Oh Yes, I Remember it Well: Why the Inherent Unreliability of Human Memory Makes Neuroimaging Technology a Poor Measure of Truth-Telling in the Courtroom

99 Pages. Posted: 19 Apr 2011. Last revised: 11 Feb 2015


Jennifer Bard

University of Cincinnati - College of Law

Date Written: March 13, 2012


We all know that human memory is unreliable. What we consider less often is how difficult it is to evaluate our own memories for accuracy. The song referenced in the title of this article depicts two lovers describing their first meeting; in all good faith, each remembers it quite differently. Yet the quest to know the thoughts of others, and to identify when we are being deliberately deceived, has encouraged the application of newly developed technologies to the task of reading minds. From witch dunking to phrenology to polygraphs, science’s promise to access thought has been met first with great enthusiasm and then with even greater disappointment. Today, companies like NoLie MRI are advertising the latest version of this promise in the form of software which, they claim, can use brain imaging technology to “by-pass” conscious thought and identify deliberate deception.

This article takes a new approach to the prospect of mind-reading technology: it reviews the claims made by those selling access to thoughts in light of the current cognitive understanding of human memory, which requires us to retire the heuristic of the brain as a camera. It then links this understanding of memory to the strong criticisms made by the Innocence Project and others seeking to overturn wrongful convictions, who argue that eye-witness testimony is misused and over-relied upon because of a misunderstanding of the inherent unreliability of memory. It argues that both information from neuroimaging and direct eye-witness testimony must meet rigorous standards for admitting forensic scientific evidence before being offered to juries to assist in fact-finding.

Although much has been written about neuroimaging as a method of truth detection, this article identifies the information that comes from neuroimaging as “memory” and then analyzes it in the context of contemporary cognitive science. It also addresses the tenacity of the claim that there is such a thing as direct access to past events, whether through eye-witness testimony or neuroimaging. Any law student who has taken Evidence has read about, or better yet experienced, an experiment in which a man bursts into a crowded classroom, runs through shouting, and then leaves. When the witnesses are questioned directly after the event, there is strong disagreement as to what the man was saying, what he was wearing, and whether or not he had a gun. This experiment, based on the work of psychologist Elizabeth Loftus, now on the faculty of the University of California at Irvine Law School, demonstrates more vividly than any dry article about cognitive science the inherent unreliability of human memory and the unwarranted conviction of eye-witnesses about what they have seen. Lawyers involved in the Innocence Project, which challenges wrongful convictions based on eye-witness testimony by examining conflicting DNA evidence, have further brought these findings to public attention.

Yet despite what has become common knowledge about the malleability of human memory, the idea that it is possible to access the brain directly to find out whether a witness is telling the truth is being put forward by companies that seek to profit from research suggesting that new imaging technology can detect when a human is lying. These companies advertise this technology as a tool for law enforcement and promote its use in U.S. trials as a way of helping juries assess the credibility of witnesses. This article explores these claims that neuroimaging scans can be used to detect lies, claims which far exceed those made by responsible scientists, and places them in the context of a series of U.S. Supreme Court cases that have dramatically changed how scientific (forensic) evidence can be presented to the jury in criminal trials.

In this article I argue that these promises of lie detection are not only based on false premises but are harmful to the integrity of the legal system, because they seek to substitute a technology that is not just undeveloped and inadequately tested but inherently flawed for the judgment of the fact-finder, judge or jury, in a criminal trial. I conclude that even if there were neuroimaging technology that could provide direct access to human thought, the result would share the inaccuracies and subjectivity that we already know are inherent features of human memory. Moreover, because this technology promises to do something that jurors know they cannot do, namely determine when a person is lying, there is a substantial risk that it will prejudice defendants because jurors will substitute the results of the technology for their own collective judgment.

Keywords: neuroethics, fMRI, neuroimaging, psychiatry, criminal procedure, eye-witness, Innocence Project, CSI, forensics, Daubert, Fifth Amendment, Sixth Amendment, privacy, disability, terrorism, investigation, psychic, technology, philosophy, ethics, construct, mind, brain, constitution

Suggested Citation

Bard, Jennifer S., Oh Yes, I Remember it Well: Why the Inherent Unreliability of Human Memory Makes Neuroimaging Technology a Poor Measure of Truth-Telling in the Courtroom (March 13, 2012). Available at SSRN: https://ssrn.com/abstract=1813425 or http://dx.doi.org/10.2139/ssrn.1813425

Jennifer S. Bard (Contact Author)

University of Cincinnati - College of Law

P.O. Box 210040
Cincinnati, OH 45221-0040
United States

