Democratising AI via Purpose Limitation for Models
AI for People Conference Proceedings - forthcoming
20 Pages · Posted: 25 Nov 2023
Date Written: July 19, 2023
Abstract
This paper proposes the concept of purpose limitation for models as an approach to democratising AI through effective regulation. We aim to define the purposes of machine learning models built for predictive analytics and generative AI in democratic processes. Unregulated (secondary) use of specific models creates immense individual and societal risks, including discrimination against individuals or groups, infringement of fundamental rights, and distortion of democracy through misinformation. We argue that possession of trained models, which in many cases consist of anonymous data (even if the training data contains personal data), lies at the core of a growing asymmetry of informational power between data companies and society. Combining ethical and legal perspectives in our interdisciplinary approach, we identify the trained model, rather than the training data, as the object of regulatory intervention. This shift of focus complements existing data protection law and the proposed Artificial Intelligence Act, which are ineffective in preventing the misuse of trained models because they focus on procedural aspects of personal data or training data. Drawing on the concept of risk prevention law and the principle of proportionality, we argue that the potential use of trained models by powerful actors in ways that damage society warrants preventive regulatory intervention. We thus seek to redress the asymmetry of power by enabling democratic control over where and how predictive and generative AI capabilities may be used, identifying beneficial purposes.
Keywords: AI, regulation, democratising AI, data protection, purpose limitation, GDPR, health data, foundation models, secondary data use, power asymmetries