EU Artificial Intelligence Act: The European Approach to AI
Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021. https://law.stanford.edu/publications/eu-artificial-intelligence-act-the-european-approach-to-ai/
11 Pages Posted: 18 Nov 2021
Date Written: September 21, 2021
On 21 April 2021, the European Commission presented the Artificial Intelligence Act. This Stanford Law School contribution lists the main points of the proposed regulatory framework for AI.
The draft regulation seeks to codify the high standards of the EU trustworthy AI paradigm. It sets out core horizontal rules for the development, trade, and use of AI-driven products, services, and systems within the territory of the EU, applicable to all industries.
The EU AI Act introduces a sophisticated ‘product safety regime’ constructed around four risk categories. It imposes requirements for market entry and certification of High-Risk AI Systems through a mandatory CE-marking procedure. This pre-market conformity regime also applies to machine learning training, testing, and validation datasets.
The AI Act draft combines a risk-based approach, built on the pyramid of criticality, with a modern, layered enforcement mechanism: as risk increases, stricter rules apply. Applications posing an unacceptable risk are banned outright. Fines for violations can reach 6% of a company's global turnover.
The EC aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe by introducing legal sandboxes that afford breathing room to AI developers.
The new European rules will fundamentally change the way AI is developed. Pursuing trustworthy AI by design seems a sensible strategy, wherever you are in the world.
Keywords: EU AI Act, European Commission, Law of AI, Trustworthy AI by Design, Risk-based Approach, Pyramid of Criticality, Product Safety Regime, CE-marking, Certification, Conformity, Audits, ML Training Datasets, Horizontal Rules, Impact Assessments, Values, Enforcement, Fines, Legal Sandboxes, Innovation
JEL Classification: O24, O31, O32, O33, O34, O35, O38, O39, K11, K12, K39, F13, Z18