Updating Purpose Limitation for AI: A normative approach from law and philosophy

18 Pages · Posted: 2 Feb 2024

Rainer Mühlhoff

University of Osnabrück

Hannah Ruschemeier

FernUniversität in Hagen

Date Written: January 22, 2024

Abstract

This paper addresses a critical regulatory gap in the EU's digital legislation, including the proposed AI Act and the GDPR: the risk of secondary use of trained models and anonymized training datasets. Anonymized training data, such as patients' medical data collected with consent for clinical research, as well as AI models trained on such data, can be freely reused in potentially harmful contexts such as insurance risk scoring and automated job applicant screening. To address this, we propose a novel approach to AI regulation, introducing what we term purpose limitation for training and reusing AI models. This approach mandates that those training AI models define the intended purpose (e.g., "medical care") and restrict the use of the model solely to this stated purpose. Additionally, it requires alignment between the purpose for which the training data were collected and the model's purpose.

The production of predictive and generative AI models signifies a new form of power asymmetry. Without public control of the purposes for which existing AI models can be reused in other contexts, this power asymmetry poses significant individual and societal risks in the form of discrimination, unfair treatment, and exploitation of vulnerabilities (e.g., the risk that medical conditions are implicitly estimated in job applicant screening). Our proposed purpose limitation for AI models aims to establish accountability and effective oversight and to prevent the collective harms that arise from this regulatory gap.

Originating from an interdisciplinary collaboration between ethics and legal studies, our paper proceeds in four steps: (1) defining purpose limitation for AI models, (2) examining the ethical reasons supporting it, (3) critiquing the inadequacies of the GDPR, and (4) evaluating the proposed AI Act's shortcomings in addressing the regulatory gap. Through these interconnected steps, we advocate for amending current AI regulation with an updated purpose limitation principle to close one of its most severe regulatory loopholes.

Keywords: AI Act, AI Governance, AI regulation, collective privacy, data ethics, data protection, general purpose AI systems, GDPR, Ethics, LLMs, EU regulation, secondary data use, power asymmetries, Open Source

Suggested Citation

Mühlhoff, Rainer and Ruschemeier, Hannah, Updating Purpose Limitation for AI: A normative approach from law and philosophy (January 22, 2024). Available at SSRN: https://ssrn.com/abstract=4711621 or http://dx.doi.org/10.2139/ssrn.4711621

Rainer Mühlhoff (Contact Author)

University of Osnabrück ( email )

Germany

Hannah Ruschemeier

FernUniversität in Hagen ( email )

Universitätsstrasse
Hagen, 58084
Germany

HOME PAGE: http://www.fernuni-hagen.de/prof-ruschemeier/en/
