Lessons from GDPR for AI Policymaking

19 Pages · Posted: 2 Aug 2023 · Last revised: 19 Sep 2023

Josephine Wolff

The Fletcher School of Law and Diplomacy, Tufts University

William Lehr

Massachusetts Institute of Technology (MIT) - Computer Science and Artificial Intelligence Laboratory (CSAIL)

Christopher S. Yoo

University of Pennsylvania Carey Law School; University of Pennsylvania - Annenberg School for Communication; University of Pennsylvania - School of Engineering and Applied Science

Date Written: August 1, 2023

Abstract

The ChatGPT chatbot has not just captured the public imagination; it has also amplified concern across industry, academia, and government policymakers about how to understand and regulate the risks and threats associated with applications of Artificial Intelligence (AI). Following the release of ChatGPT, some EU regulators proposed changes to the EU AI Act that would classify AI systems like ChatGPT, which generate complex text without human oversight, as “high-risk” AI systems subject to the law’s requirements. That classification proved controversial, with other regulators arguing that technologies like ChatGPT, which merely generate text, are “not risky at all.” This controversy risks disrupting coherent discussion of, and progress toward, sound regulation of Large Language Models (LLMs), AI, or information and communication technologies (ICTs) more generally. Despite nascent efforts by OECD.AI and the EU, it remains unclear where ChatGPT fits within AI and where AI fits within the larger context of digital policy and the regulation of ICTs.

This paper addresses two research questions about AI policy: (1) How are LLMs like ChatGPT shifting the policy discussion around AI regulation? (2) What lessons can regulators learn from the EU’s General Data Protection Regulation (GDPR) and other data protection policymaking efforts that can be applied to AI policymaking?

The first part of the paper addresses how ChatGPT and other LLMs have changed the policy discourse in the EU and other regions around regulating AI, and what the broader implications of these shifts may be for AI regulation more widely. This section reviews the existing proposal for an EU AI Act and its accompanying classification of high-risk AI systems, considers the changes prompted by the release of ChatGPT, and examines how LLMs appear to have altered policymakers’ conceptions of the risks presented by AI. Finally, we present a framework for understanding how the security and safety risks posed by LLMs fit within the larger context of risks presented by AI and current efforts to formulate a regulatory framework for AI.

The second part of the paper considers the similarities and differences between the proposed AI Act and GDPR in terms of (1) which organizations are regulated, or scope; (2) reliance on organizations’ self-assessment of potential risks, or degree of self-regulation; (3) penalties; and (4) the technical knowledge required for effective enforcement, or complexity. For each of these areas, we consider how regulators scoped or implemented GDPR to make it manageable, enforceable, meaningful, and consistent across a wide range of organizations handling many different kinds of data, as well as the extent to which they succeeded. We then examine the ways in which those same approaches may or may not be applicable to the AI Act, and the ways in which AI may prove more difficult to regulate than the data protection and privacy issues covered by GDPR. We also consider how AI may make it more difficult to enforce and comply with GDPR, since the continued evolution of AI technologies may create cybersecurity tools and threats that affect the efficacy of GDPR and privacy policies. This section argues that, based on the experience of implementing the most technologically complex and self-regulation-focused elements of GDPR, the proposed AI Act’s reliance on self-regulation and the technical complexity of its enforcement are likely to pose significant challenges.

Keywords: GDPR, EU AI Act, AI policy, tech regulation, ChatGPT

Suggested Citation

Wolff, Josephine and Lehr, William and Yoo, Christopher S., Lessons from GDPR for AI Policymaking (August 1, 2023). U of Penn Law School, Public Law Research Paper No. 23-32. Available at SSRN: https://ssrn.com/abstract=4528698 or http://dx.doi.org/10.2139/ssrn.4528698

Josephine Wolff (Contact Author)

The Fletcher School of Law and Diplomacy, Tufts University

160 Packard
Medford, MA 02155
United States

William Lehr

Massachusetts Institute of Technology (MIT) - Computer Science and Artificial Intelligence Laboratory (CSAIL)

Stata Center
Cambridge, MA 02142
United States

Christopher S. Yoo

University of Pennsylvania Carey Law School

3501 Sansom St.
Philadelphia, PA 19104-6204
United States
(215) 746-8772 (Phone)

HOME PAGE: http://www.law.upenn.edu/faculty/csyoo/

University of Pennsylvania - Annenberg School for Communication

3620 Walnut St.
Philadelphia, PA 19104-6220
United States
(215) 746-8772 (Phone)

University of Pennsylvania - School of Engineering and Applied Science

3330 Walnut St.
Philadelphia, PA 19104-6309
United States
(215) 746-8772 (Phone)

Paper statistics

Downloads: 477
Abstract Views: 1,127
Rank: 104,378