Audio Deepfakes and the Regulation of the Landlords of Creativity
Cambridge Forum on AI: Law and Governance (forthcoming 2025)
37 Pages · Posted: 3 Feb 2025 · Last revised: 8 May 2025
Date Written: April 18, 2025
Abstract
This paper examines the pivotal role of audio deepfakes in undermining trust and proposes placing liability on the organizations that build generative AI (“genAI”) models as a way of addressing malicious uses of such deepfakes. While visual deepfakes have undeniably wreaked havoc in various contexts, such as the notorious deepfake video in which the Ukrainian President implored his troops to surrender during the Russian invasion, audio deepfakes present a distinct and potentially more pernicious threat. Unlike their visual counterparts, audio deepfakes lack readily discernible cues, making expert analysis essential for determining a recording’s authenticity. Consequently, the misuse of audio deepfakes in propaganda campaigns carries the potential for far graver consequences than their visual equivalents: they can insidiously erode trust, foment discord, and mislead the public long before detection is possible.
To address this regulatory gap, this paper proposes holding the companies that build genAI models accountable for certain malicious uses of those models. These “landlords of creativity” are currently the only actors with the computational resources needed to train the genAI models behind such deepfakes. Rather than selling the models directly to consumers, these landlords typically “lease” them as a service through Application Programming Interfaces (APIs), allowing consumers to fine-tune prebuilt models and invoke them through API calls.
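To make the leasing arrangement concrete, the sketch below shows how a renter might interact with a hosted voice model entirely through a provider’s API. The endpoint, model names, and parameters are hypothetical, but the pattern, fine-tuning a prebuilt model and then calling it remotely while it never leaves the landlord’s datacenter, mirrors how commercial genAI services are typically consumed.

```python
import requests

# Hypothetical provider endpoint; the model itself never leaves
# the landlord's datacenter.
API_BASE = "https://api.example-genai.com/v1"
API_KEY = "sk-..."  # renter's credential, issued by the provider
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: fine-tune a prebuilt voice model on samples the renter
# has already uploaded (file ID is hypothetical).
job = requests.post(
    f"{API_BASE}/fine-tunes",
    headers=HEADERS,
    json={"base_model": "voice-clone-v2", "training_file": "file-abc123"},
).json()

# Step 2: generate audio by invoking the fine-tuned model remotely;
# the renter only ever receives the output, never the model weights.
audio = requests.post(
    f"{API_BASE}/audio/generate",
    headers=HEADERS,
    json={"model": job["fine_tuned_model"], "text": "Hello, world."},
)
with open("output.wav", "wb") as f:
    f.write(audio.content)
```

Because every generation passes through the provider’s infrastructure in this way, the landlord retains a natural control point for the safe-harbor measures discussed next.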
Because genAI models reside within these landlords’ datacenters, it is appropriate to place some liability on the landlords of creativity. In the simplest case, a landlord directly trains a model whose primary use is malicious; in that circumstance, the landlord should be fully responsible for that use. Most cases, however, will involve genAI models that also have non-malicious uses. In these instances, this paper suggests that a landlord can avoid liability if it (1) limits the capabilities provided to unidentified customers and/or (2) attributes and identifies the data sources of generated content, reporting that content to a central regulatory authority.
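The second safe harbor implies a provenance pipeline on the provider’s side. The following is a minimal sketch under stated assumptions: the registry endpoint, record schema, and identifiers are all hypothetical, but it illustrates the core idea that the provider hashes each generated clip, ties it to the identified renter and the source model, and reports the record to a central authority.

```python
import hashlib
from datetime import datetime, timezone

import requests

# Hypothetical endpoint for a central regulatory registry.
REGISTRY_URL = "https://registry.example-authority.gov/v1/reports"

def report_generation(audio_bytes: bytes, customer_id: str, model_id: str) -> None:
    """Attribute a generated clip to its source and report it to the regulator."""
    record = {
        # Content fingerprint: lets the authority match a circulating
        # clip back to this generation event.
        "content_sha256": hashlib.sha256(audio_bytes).hexdigest(),
        "customer_id": customer_id,    # the identified renter
        "source_model": model_id,      # which hosted model produced the clip
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    requests.post(REGISTRY_URL, json=record, timeout=10)
```

A hash-based record of this kind only enables after-the-fact attribution; pairing it with the first safe harbor, restricting what unverified customers can generate in the first place, addresses harms that attribution alone cannot prevent.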
This paper begins by examining the genAI models powering audio deepfakes. It then surveys how regulatory regimes such as the Biden Administration’s Blueprint for an AI Bill of Rights, the EU AI Act, and China’s Algorithm Recommendation Provisions for Internet Information Services deal with misuses of genAI. Finally, the paper concludes by advocating for holding the landlords of creativity responsible for their renters’ malicious uses.
Keywords: Generative AI Regulation, AI Regulation, Comparative AI Regulation, Landlords of Creativity, AI Liability