Deepfakes in Court: How Judges Can Proactively Manage Alleged AI-Generated Material in National Security Cases
52 Pages
Posted: 10 Sep 2024
Date Written: August 08, 2024
Abstract
Dall-E. ChatGPT. GPT-4. Words that did not exist in the English lexicon just a few years ago are now commonplace. With the widespread availability of Artificial Intelligence (AI) tools, specifically Generative AI, whether in the context of text, audio, video, imagery, or combinations of these, it is inevitable that trials related to national security will involve evidentiary issues raised by Generative AI. We must confront two possibilities: first, that evidence presented is AI-generated and not real, and second, that other evidence is genuine but alleged to be fabricated. Technologies designed to detect AI-generated content have proven to be both unreliable and biased. Humans have likewise proven to be poor judges of whether a digital artifact is real or fake. There is no foolproof way today to classify text, audio, video, or images as authentic or AI-generated, especially as adversaries continually evolve their deepfake generation methodology to evade detection. The generation and detection of fake evidence will thus remain a cat-and-mouse game. These are not challenges of a far-off future; they are already here. Judges will increasingly need to establish best practices to deal with a potential deluge of evidentiary issues.
Using a civil lawsuit hypothetical, we discuss the evidentiary challenges posed by Generative AI. The hypothetical involves a U.S. presidential candidate seeking an injunction against her opponent for circulating disinformation in the weeks leading up to the election. We address the risk that fabricated evidence might be treated as genuine, and genuine evidence as fake. Through this scenario, we identify the best practices that judges should follow to raise and resolve Generative AI issues under the Federal Rules of Evidence.
We then provide a step-by-step approach for judges to follow when they grapple with the prospect of alleged AI-generated fake evidence. Under this approach, judges should require more than a showing that the evidence is merely more likely than not what it purports to be. Instead, they must weigh the risk of negative consequences should the evidence turn out to be fake. Our approach calls for courts to schedule a pretrial evidentiary hearing well in advance of trial, at which both the proponent and the opponent of the evidence can argue its admissibility. In its ruling, the judge should admit the evidence, leaving its disputed authenticity to the jury, only after considering under Rule 403 whether its probative value is substantially outweighed by the danger of unfair prejudice to the party against whom the evidence will be used. Our approach thus illustrates how judges can protect the integrity of jury deliberations in a manner consistent with the current Federal Rules of Evidence and relevant case law.
Keywords: deepfake, deepfakes, deep fake, deep fakes, evidence, authentic, prejudicial, probative, jury, juror, judge, Artificial Intelligence, Federal Rules of Evidence, Federal Rules of Civil Procedure, Generative AI, Generative Artificial Intelligence, authentication of AI, deepfake detection, national security, election integrity, misinformation, disinformation, watermark, Federal Rule of Evidence 901, Federal Rule of Evidence 403, AI-generated material, AI-generated evidence