Where Residual Risks Reside: A Comparative Approach to Art 9(4) of the European Union's Proposed AI Regulation
29 Pages. Posted: 10 Nov 2021. Last revised: 22 Nov 2021.
Date Written: September 30, 2021
Abstract
This paper explores how to judge the acceptability of “residual risks” under the European Union’s Proposal for a Regulation laying down harmonized rules on Artificial Intelligence (the Proposal). The Proposal is a risk-based regulation that prohibits certain uses of AI and imposes several layers of risk controls upon ‘high-risk’ AI systems. Much of the commentary on the Proposal has focused on which AI systems should be prohibited and which should be classified as high-risk. This paper bypasses that threshold question, engaging instead with a key issue of implementation.
The Proposal imposes a wide range of requirements on providers of high-risk AI systems (among others) but acknowledges that certain AI systems would still carry a level of “residual risk” to health, safety and fundamental rights. Art 9(4) provides that, in order for high-risk systems to be put into use, risk management measures must be such that residual risks are judged “acceptable”.
Participants in the AI supply chain need certainty about what degree of care and precaution in AI development, and in risk management specifically, will satisfy the requirements of Art 9(4).
This paper advocates a cost-benefit approach to Art 9(4). It argues that Art 9(4), read in context, calls for proportionality between the precautions taken against risks posed by high-risk AI systems and the risks themselves, but leaves those responsible for implementing Art 9(4) in the dark about how to achieve such proportionality. The paper identifies potentially applicable mid-level principles both in European law (such as medical devices regulation) and in common law doctrines governing the acceptability of precaution in relation to risky activities (particularly negligence and workplace health and safety). It demonstrates how these principles would apply to systems with different risk and benefit profiles, using hypothetical and real-world examples. And it sets out some difficult questions that arise in weighing the costs and benefits of precautions, calling on European policy-makers to give stakeholders more clarity on how those questions should be answered.
Keywords: artificial intelligence, AI, high-risk, fundamental rights, safety, negligence, workplace health and safety, common law, comparative law, AI Act, European Union, standard of care, residual risk, risk management