Taking AI Risks Seriously: a New Assessment Model for the AI Act

AI & Society, Springer, Vol. 38, No. 3, 2023, https://doi.org/10.1007/s00146-023-01723-z

8 Pages · Posted: 15 May 2023 · Last revised: 13 Jul 2023

Claudio Novelli

University of Bologna - Department of Legal Studies; Yale University - Digital Ethics Center

Federico Casolari

Alma Mater Studiorum - University of Bologna

Antonino Rotolo

University of Bologna - Department of Legal Sciences

Mariarosaria Taddeo

University of Oxford - Oxford Internet Institute

Luciano Floridi

Yale University - Digital Ethics Center; University of Bologna - Department of Legal Studies

Date Written: May 14, 2023

Abstract

The EU Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, we propose applying the risk categories to specific AI scenarios, rather than solely to fields of application, using a risk assessment model that integrates the AIA with the risk approach arising from the Intergovernmental Panel on Climate Change (IPCC) and related literature. This integrated model enables the estimation of AI risk magnitude by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We illustrate this model using large language models (LLMs) as an example.

Keywords: risk assessment, Artificial Intelligence, AI Act, climate change, EU, IPCC

Suggested Citation

Novelli, Claudio and Casolari, Federico and Rotolo, Antonino and Taddeo, Mariarosaria and Floridi, Luciano, Taking AI Risks Seriously: a New Assessment Model for the AI Act (May 14, 2023). AI & Society, Springer, Vol. 38, No. 3, 2023, https://doi.org/10.1007/s00146-023-01723-z. Available at SSRN: https://ssrn.com/abstract=4447964 or http://dx.doi.org/10.2139/ssrn.4447964

Claudio Novelli (Contact Author)

University of Bologna - Department of Legal Studies ( email )

Via Zamboni 22
Bologna, Bo 40100
Italy

HOME PAGE: https://dsg.unibo.it/en

Yale University - Digital Ethics Center ( email )

85 Trumbull Street
New Haven, CT 06511
United States

HOME PAGE: https://dec.yale.edu

Federico Casolari

Alma Mater Studiorum - University of Bologna ( email )

Department of Legal Studies
via Zamboni, 27/29
Bologna, Bologna 40126
Italy
+39 051 20 9 9683 (Phone)

HOME PAGE: http://www.unibo.it/faculty/federico.casolari

Antonino Rotolo

University of Bologna - Department of Legal Sciences

Bologna
Italy

Mariarosaria Taddeo

University of Oxford - Oxford Internet Institute ( email )

1 St. Giles
University of Oxford
Oxford, Oxfordshire OX1 3JS
United Kingdom

Luciano Floridi

Yale University - Digital Ethics Center ( email )

85 Trumbull Street
New Haven, CT 06511
United States
203-432-6473 (Phone)

University of Bologna - Department of Legal Studies ( email )

Via Zamboni 22
Bologna, Bo 40100
Italy

HOME PAGE: http://www.unibo.it/sitoweb/luciano.floridi/en

Paper statistics

Downloads: 1,843 · Abstract Views: 4,399 · Rank: 16,978