When Non-Consensual Intimate Deepfakes Go Viral: The Insufficiency of the UK Online Safety Act
33 Pages · Posted: 22 Apr 2024 · Last revised: 30 Aug 2024
Date Written: April 12, 2024
Abstract
Advances in artificial intelligence (AI) have drastically simplified the creation of synthetic media. While public concern often centres on misinformation, 'non-consensual intimate deepfakes' (NCID) - a form of image-based sexual abuse - pose a present, severe, and growing threat that disproportionately affects women and girls. This paper examines the measures introduced by the UK Online Safety Act 2023 (OSA) and argues that its new criminal offences and its 'systems and processes' approach are insufficient to counter NCID in the UK. Both rely on platform policies that are often inconsistent in their treatment of synthetic media and on content removal mechanisms that offer victim-survivors only limited redress after the harm has occurred. The paper contends that stronger prevention mechanisms are necessary and proposes that the law require all AI-powered deepfake creation tools to prohibit the generation of intimate synthetic content and to implement comprehensive, enforceable moderation systems.
Keywords: deepfakes, synthetic media, image-based sexual abuse, Online Safety Act, content moderation, AI regulation