Between risk management and proportionality: The risk-based approach in the EU’s Artificial Intelligence Act Proposal
Nordic Yearbook of Law and Informatics
24 pages. Posted: 6 Jan 2022
Date Written: September 30, 2021
The European Commission has issued a proposal for an Artificial Intelligence (AI) Regulation (hereinafter ‘Proposal’ or Artificial Intelligence Act, AIA), laying down harmonised rules concerning certain AI systems in the European Union (EU). A key regulatory approach in the Proposal is that the development and use of AI systems are regulated according to risk level. The Proposal’s ‘risk-based approach’ consists of using risk levels as thresholds for specific requirements in the Proposal. AI systems that pose unacceptable risks are prohibited, and high-risk systems must comply with specific requirements. Less risky systems must comply with fewer or no requirements.
The Proposal explicitly aims to manage the risks of AI systems employed in the EU, so risks are its main object and justification. The Proposal therefore emphasises the need to establish rules that are proportionate and effective. To achieve this proportionality, the risk-based approach uses risk levels (e.g., ‘high risk’) to trigger requirements for AI systems. Thus, key parts of the Proposal merge risk thinking with rulemaking. The word ‘risk’ occurs 344 times in the Proposal and many more times in the accompanying Explanatory Memorandum, Annexes and Impact Assessment. Risk is also emphasised in some of the literature on the regulation of AI. It therefore comes as no surprise that many elements of the Proposal are in some sense ‘risk-based’. Nevertheless, only one of these elements is identified as the ‘clearly defined risk-based approach’. The purpose of this contribution is to distinguish the so-called ‘clearly defined risk-based approach’ from the other risk-based approaches used in the Proposal and to examine whether the concept of risk is applied in the Proposal in a coherent, logical and consistent way.
The ‘clearly defined’ risk-based approach raises questions about its aim, logic and limitations. What exactly characterises the approach? Is this an example of the European lawmaker engaging in a formalised risk management process by identifying, analysing and treating risk? At some level, this seems to be the case: the EU identifies AI risks as a regulatory concern, distinguishes various risk levels and proposes law to manage these risks. This could be seen as an attempt by the lawmaker to act more rationally by employing a rigorous risk management approach. On closer examination, however, there are indications that the risk-based approach is not as rigorous as it might initially appear. Ultimately, this paper considers what problem, if any, the risk-based approach seeks to solve. It suggests that the problem to be solved by the approach is not primarily how to manage AI risks, but how to avoid a potentially over-broad scope of the regulation—a potential created by the broad definition of AI included in the Proposal. The alternative to this approach would have been a blanket regulation of all AI, which might have imposed excessive obligations on AI producers and users, disproportionately hampering the development of societally desired and economically lucrative AI. Paradoxically, the aim and utility of the risk-based approach are not primarily to manage risk but to ensure legislative proportionality.
The paper primarily aims to analyse the Proposal, but in doing so it also introduces, presents and describes parts of the Proposal, as not all readers will have studied it in detail. Moreover, the law-making process may move on from where it stands at the time of writing, so it is useful to document some key features of the current Proposal, which forms the starting point of this paper.
Keywords: Artificial intelligence, law, risk management, regulation, EU, policy, law-making