Acceptable Risks in Europe’s Proposed AI Act: Reasonableness and Other Principles for Deciding How Much Risk Management Is Enough
European Journal of Risk Regulation (2023)
16 Pages. Posted: 27 Jul 2023. Last revised: 10 Aug 2023
Date Written: July 9, 2023
Abstract
This paper critically evaluates the European Commission's proposed AI Act's approach to risk management and risk acceptability for high-risk AI systems that pose risks to fundamental rights and safety. The Act aims to promote "trustworthy" AI with a proportionate regulatory burden. Its provisions on risk acceptability require residual risks from high-risk systems to be reduced or eliminated "as far as possible" (AFAP), having regard to the "state of the art". This criterion, especially if interpreted narrowly, is unworkable and promotes neither a proportionate regulatory burden nor trustworthiness. By contrast, the Parliament's most recent draft amendments to the risk management provisions introduce "reasonableness" and cost-benefit analysis, and are more transparent about the value-laden and contextual nature of risk acceptability judgements. This paper argues that the Parliament's approach is more workable and better balances the goals of proportionality and trustworthiness. It explains what reasonableness in risk acceptability judgements would entail, drawing on principles from negligence law and European medical devices regulation. It further contends that risk acceptability judgements need a firm foundation of civic legitimacy, including detailed guidance or involvement from regulators and meaningful input from affected stakeholders.
Keywords: artificial intelligence, AI Act, risk management, risk acceptability, high-risk AI systems, standards, assurance, fundamental rights, reasonableness, reasonable care, cost benefit analysis, negligence, medical devices, as far as possible, AFAP, as low as reasonably practicable, ALARP