Normative Challenges of Risk Regulation of Artificial Intelligence and Automated Decision-Making

39 Pages | Posted: 21 Nov 2022


Carsten Orwat

Karlsruhe Institute of Technology - Institute for Technology Assessment and Systems Analysis (ITAS)

Jascha Bareis

Alexander von Humboldt Institute for Internet and Society; Karlsruhe Institute of Technology - Institute for Technology Assessment and Systems Analysis (ITAS)

Anja Folberth

Karlsruhe Institute of Technology - Institute for Technology Assessment and Systems Analysis (ITAS); Heidelberg University - Institute for Political Science

Jutta Jahnel

Karlsruhe Institute of Technology - Institute for Technology Assessment and Systems Analysis (ITAS)

Christian Wadephul

Karlsruhe Institute of Technology - Institute for Technology Assessment and Systems Analysis (ITAS)

Date Written: November 11, 2022

Abstract

Recent proposals for regulating artificial intelligence (AI) and automated decision-making (ADM) suggest a particular form of risk regulation, namely a risk-based approach. The most salient example is the Artificial Intelligence Act (AIA) proposed by the European Commission. This article addresses challenges for adequate risk regulation that arise primarily from the specific type of risks involved, i.e. risks to the protection of fundamental rights and fundamental societal values. These challenges result mainly from the normative ambiguity of fundamental rights and societal values when they are interpreted, specified, or operationalised for risk assessments. This is exemplified for (1) human dignity, (2) informational self-determination, data protection and privacy, (3) justice and fairness, and (4) the common good. Normative ambiguities require normative choices, which the proposed AIA distributes among different actors. Particularly critical normative choices include selecting normative conceptions to specify risks, aggregating and quantifying risks (including the use of metrics), balancing value conflicts, setting levels of acceptable risk, and standardisation. To avoid a lack of democratic legitimacy as well as legal uncertainty, scientific and political debates are suggested.

Keywords: risk regulation, risk governance, artificial intelligence, automated decision-making, fundamental rights, human rights, operationalisation, standardisation, quantification

Suggested Citation

Orwat, Carsten and Bareis, Jascha and Folberth, Anja and Jahnel, Jutta and Wadephul, Christian, Normative Challenges of Risk Regulation of Artificial Intelligence and Automated Decision-Making (November 11, 2022). Available at SSRN: https://ssrn.com/abstract=4274828 or http://dx.doi.org/10.2139/ssrn.4274828

Carsten Orwat (Contact Author)

Karlsruhe Institute of Technology - Institute for Technology Assessment and Systems Analysis (ITAS)

Karlstraße 11
Karlsruhe, 76133
Germany

HOME PAGE: http://www.itas.kit.edu/kollegium_orwat_carsten.php

Jascha Bareis

Alexander von Humboldt Institute for Internet and Society

Bebelplatz 1
Berlin, 10099
Germany

Karlsruhe Institute of Technology - Institute for Technology Assessment and Systems Analysis (ITAS)

Anja Folberth

Karlsruhe Institute of Technology - Institute for Technology Assessment and Systems Analysis (ITAS)

Karlstraße 11
Karlsruhe, 76133
Germany

Heidelberg University - Institute for Political Science

Bergheimer Str. 58
Heidelberg, 69115
Germany

Jutta Jahnel

Karlsruhe Institute of Technology - Institute for Technology Assessment and Systems Analysis (ITAS)

Christian Wadephul

Karlsruhe Institute of Technology - Institute for Technology Assessment and Systems Analysis (ITAS)


Paper statistics

Downloads: 118
Abstract Views: 778
Rank: 504,211