From Transparency to Justification: Toward Ex Ante Accountability for AI

Brooklyn Law School, Legal Studies Paper No. 712

Brussels Privacy Hub Working Paper No. 33

27 Pages
Posted: 23 May 2022
Last revised: 21 Jun 2022


Gianclaudio Malgieri

Universiteit Leiden, eLaw; Vrije Universiteit Brussel (VUB) - Faculty of Law

Frank Pasquale

Cornell Law School; Cornell Tech

Date Written: May 3, 2022

Abstract

At present, policymakers tend to presume that AI used by firms is legal and investigate only when there is suspicion of wrongdoing. What if the presumption were flipped? That is, what if a firm had to demonstrate that its AI met clear requirements for security, non-discrimination, accuracy, appropriateness, and correctability before it was deployed? This paper proposes a system of “unlawfulness by default” for AI systems: an ex ante model in which some AI developers bear the burden of proof to demonstrate that their technology is not discriminatory, not manipulative, not unfair, not inaccurate, and not illegitimate in its legal bases and purposes. The EU’s GDPR and proposed AI Act move toward a sustainable environment for AI systems, but they remain too lenient: the penalty for non-conformity is monetary, not a prohibition on deployment. This paper proposes a pre-approval model in which some AI developers, before launching their systems on the market, must perform a preliminary risk assessment of their technology followed by a self-certification. If the risk assessment shows that a system is high-risk, a request for approval (to a strict regulatory authority, such as a Data Protection Authority) should follow. In other words, we propose a presumption of unlawfulness for high-risk models, with AI developers bearing the burden of proof to justify why their AI is not illegitimate (and thus not unfair, not discriminatory, and not inaccurate). Such a standard may not seem administrable now, given the widespread and rapid adoption of AI at firms of all sizes. But these requirements could be applied, at first, to the largest firms’ most troubling practices, and only gradually (if at all) to smaller firms and less menacing practices.

Keywords: AI, accountability, justification, GDPR

Suggested Citation

Malgieri, Gianclaudio and Pasquale, Frank A., From Transparency to Justification: Toward Ex Ante Accountability for AI (May 3, 2022). Brooklyn Law School, Legal Studies Paper No. 712; Brussels Privacy Hub Working Paper No. 33. Available at SSRN: https://ssrn.com/abstract=4099657 or http://dx.doi.org/10.2139/ssrn.4099657

Gianclaudio Malgieri (Contact Author)

Universiteit Leiden, eLaw

Steenschuur 25
Leiden, 2311
Netherlands

Vrije Universiteit Brussel (VUB) - Faculty of Law

Brussels
Belgium

HOME PAGE: http://www.vub.ac.be/LSTS/members/malgieri/

Frank A. Pasquale

Cornell Law School

Myron Taylor Hall
Ithaca, NY 14853
United States

Cornell Tech

111 8th Avenue #302
New York, NY 10011
United States


Paper statistics

Downloads: 910
Abstract Views: 3,394
Rank: 48,421