Human Oversight under Article 14 of the EU AI Act
15 Pages · Posted: 22 Apr 2025 · Last revised: 21 Feb 2025
Date Written: February 14, 2025
Abstract
This chapter analyses the human oversight requirement for high-risk AI systems under Article 14 AI Act, a provision that marks a significant development in AI governance. Emerging from the EU's broader vision of human-centric AI, Article 14 takes a uniquely comprehensive approach through its applicability to all high-risk AI systems, regardless of the sector, context, or role an AI system plays within a workflow. Human oversight in the AI Act serves several purposes. It is primarily aimed at preventing risks to health, safety, and fundamental rights through output correction, while also pursuing the broader objectives of preserving human agency and building trust in AI systems. To achieve these aims, Article 14 requires providers to create the technical and operational conditions for effective oversight. This is complemented by Article 26(2) AI Act, which requires deployers to assign qualified personnel with appropriate authority, competence, and support. Empirical evidence suggests significant limitations to human oversight's effectiveness, arising in particular from humans' cognitive constraints and automation bias. Article 14's success therefore requires careful implementation that acknowledges these limitations and avoids overreliance on human oversight as a standalone safeguard.
Keywords: Human oversight, EU AI Act, Human-in-the-loop, artificial intelligence regulation, algorithmic governance, algorithmic accountability, algorithmic decision-making, automation bias, risk management, effective human oversight, meaningful human oversight