Artificial Intelligence Act: A Policy Prototyping Experiment: Operationalizing the Requirements for AI Systems – Part I

Andrade, Norberto Nuno Gomes de, and Antonella Zarra. 'Artificial Intelligence Act: A Policy Prototyping Experiment: Operationalizing the Requirements for AI Systems – Part I' (2022), at https://openloop.org/reports/2022/11/Artificial_Intelligence_Act_A_Policy_Prototyping_Experiment_Operationalizing

76 Pages Posted: 27 Feb 2023


Norberto Nuno Gomes de Andrade

IE Law School; Stanford Law School, Center for Internet & Society

Antonella Zarra

University of Hamburg, Institute of Law and Economics; Erasmus University Rotterdam (EUR), Erasmus School of Law, Rotterdam Institute of Law and Economics, Students; Bologna University, Department of Economics

Date Written: November 2022

Abstract

This report presents the findings and recommendations of the first part of Open Loop’s policy prototyping program on the European Artificial Intelligence Act, which ran in Europe from June to July 2022 in partnership with Estonia’s Ministry of Economic Affairs and Communications, Estonia’s Ministry of Justice, and the Malta Digital Innovation Authority (MDIA).

We enlisted 53 AI companies to participate in the Open Loop Forum (OLF), a dedicated online platform where they met to discuss topics and complete several research-related tasks. Over a period of three weeks (June–July 2022), participants were invited to provide their feedback and views on selected articles of the AI Act:

- Taxonomy of AI actors (Article 3)
- Risk management (Article 9)
- Data quality requirements (Article 10)
- Technical documentation (Article 11)
- Transparency and human oversight (Articles 13 and 14)
- Regulatory sandboxes (Article 53)

The majority of the participants found that the provisions were clear and feasible and could contribute to one of the goals of the legislator: to build and deploy trustworthy AI. However, there were several areas in the AIA with room for improvement and some provisions that might even hinder the other goal of the legislator: enabling the uptake of AI in Europe.

Based on the results of the prototyping exercise, the report provides the legislator with a number of recommendations to improve the clarity, feasibility and effectiveness of the AI Act.

1. Consider revising/expanding the taxonomy of AI actors in Article 3 and/or more accurately describe possible interactions between actors (e.g., co-production of AI systems and use of open-source tooling) to more accurately reflect the AI ecosystem.

2. Given the difficulty in assessing "reasonably foreseeable misuse" (Article 9) and the limited focus on the impact of risks, provide guidance on risks and risk assessment, in particular for startups and SMEs.

3. Provide more concrete guidance, methodologies, and/or metrics for assessing the data quality requirements through, e.g., subordinate legislation and/or soft law instruments, standardization, or guidance from the regulator (Article 10).

4. Revise the data quality requirements that datasets be "error-free" and "complete", as these are considered unrealistic and infeasible (Article 10).

5. Provide more concrete guidance, templates, and/or metrics for the technical documentation through, e.g., subordinate legislation and/or soft law instruments, standardization, or guidance from the regulator (Article 11).

6. Avoid a situation where the requirement for technical documentation becomes a "paper tiger" by ensuring that regulators have enough suitably qualified staff to actually assess the technical documentation (Article 11).

7. Consider distinguishing more clearly between different audiences for explanations and other transparency requirements (Articles 13 and 14) in the AIA.

8. The AIA’s success hinges on the ability to execute and enforce the regulation. Therefore, it is important to ensure that the future workforce contains enough qualified workers, in particular when it comes to human oversight of AI (Article 14).

9. Maximize the potential of regulatory sandboxes to foster innovation, strengthen compliance, and improve regulation. Ensure that, through implementing acts and guidance, conditions for effective AI regulatory sandboxes are created (e.g., collaboration, transparency, guidance and legal certainty, and protection from enforcement) (Article 53).

Keywords: Artificial Intelligence, European Union, AI companies, Artificial Intelligence Act, AI Act, AI governance, AI regulation, policy prototyping, regulatory sandboxes, risk assessment, AI transparency, AI risk management, human oversight, AI standards, experimental regulation, data quality

JEL Classification: K23, L86, O33, O38

Suggested Citation

Andrade, Norberto Nuno Gomes de, and Antonella Zarra. 'Artificial Intelligence Act: A Policy Prototyping Experiment: Operationalizing the Requirements for AI Systems – Part I' (November 2022), at https://openloop.org/reports/2022/11/Artificial_Intelligence_Act_A_Policy_Prototyping_Experiment_Operationalizing. Available at SSRN: https://ssrn.com/abstract=4365515

Norberto Nuno Gomes de Andrade

IE Law School ( email )

Madrid
Spain

Stanford Law School, Center for Internet & Society ( email )

559 Nathan Abbott Way
Stanford, CA 94305-8610
United States

Antonella Zarra (Contact Author)

University of Hamburg, Institute of Law and Economics ( email )

Hamburg
Germany

Erasmus University Rotterdam (EUR), Erasmus School of Law, Rotterdam Institute of Law and Economics, Students ( email )

Burgemeester Oudlaan 50
PO Box 1738
Rotterdam
Netherlands

HOME PAGE: https://www.eur.nl/en/esl/research/areas/institutes/law-and-economics/staff

Bologna University, Department of Economics ( email )

Bologna
Italy

HOME PAGE: https://www.unibo.it/sitoweb/antonella.zarra2/en


Paper statistics

Downloads: 145
Abstract Views: 453
Rank: 371,080