From Transparency to Justification: Toward Ex Ante Accountability for AI

Brooklyn Law School, Legal Studies Paper No. 712

Brussels Privacy Hub Working Paper, No. 33

27 Pages · Posted: 23 May 2022 · Last revised: 21 Jun 2022

Gianclaudio Malgieri

Universiteit Leiden, eLaw; Vrije Universiteit Brussel (VUB) - Faculty of Law

Frank A. Pasquale

Brooklyn Law School

Date Written: May 3, 2022

Abstract

At present, policymakers tend to presume that AI used by firms is legal, and investigate and regulate only when there is suspicion of wrongdoing. What if the presumption were flipped? That is, what if a firm had to demonstrate that its AI met clear requirements for security, non-discrimination, accuracy, appropriateness, and correctability before it was deployed? This paper proposes a system of “unlawfulness by default” for AI systems, an ex ante model in which some AI developers bear the burden of proof to demonstrate that their technology is not discriminatory, not manipulative, not unfair, not inaccurate, and not illegitimate in its legal bases and purposes. The EU’s GDPR and proposed AI Act tend toward a sustainable environment for AI systems, but they remain too lenient: the sanction for non-conformity with the Regulation is monetary, not a prohibition. This paper proposes a pre-approval model in which some AI developers, before launching their systems onto the market, must perform a preliminary risk assessment of their technology followed by a self-certification. If the risk assessment shows that these systems are high-risk, an approval request (to a strict regulatory authority, like a Data Protection Agency) should follow. In other words, we propose a presumption of unlawfulness for high-risk models, with AI developers bearing the burden of proof to justify why the AI is not illegitimate (and thus not unfair, not discriminatory, and not inaccurate). Such a standard may not seem administrable now, given the widespread and rapid adoption of AI at firms of all sizes. But such requirements could be applied, at first, to the largest firms’ most troubling practices, and only gradually (if at all) to smaller firms and less menacing practices.

Keywords: AI, accountability, justification, GDPR

Suggested Citation

Malgieri, Gianclaudio and Pasquale, Frank A., From Transparency to Justification: Toward Ex Ante Accountability for AI (May 3, 2022). Brooklyn Law School, Legal Studies Paper No. 712, Brussels Privacy Hub Working Paper, No. 33, Available at SSRN: https://ssrn.com/abstract=4099657 or http://dx.doi.org/10.2139/ssrn.4099657

Gianclaudio Malgieri (Contact Author)

Universiteit Leiden, eLaw

Steenschuur 25
Leiden, 2311
Netherlands

Vrije Universiteit Brussel (VUB) - Faculty of Law

Brussels
Belgium

HOME PAGE: http://www.vub.ac.be/LSTS/members/malgieri/

Frank A. Pasquale

Brooklyn Law School

250 Joralemon Street
Brooklyn, NY 11201
United States
