Defining the Scope of AI Regulations

Forthcoming in Law, Innovation and Technology

Legal Priorities Project Working Paper Series No. 9

32 pages · Posted: 24 Sep 2019 · Last revised: 9 Dec 2022

Jonas Schuett

Centre for the Governance of AI; Legal Priorities Project; Goethe University Frankfurt

Date Written: August 22, 2021


The paper argues that the material scope of AI regulations should not rely on the term "artificial intelligence" (AI). The argument is developed by proposing a number of requirements for legal definitions, surveying existing definitions of AI, and discussing the extent to which they meet the proposed requirements. It is shown that existing definitions of AI fail to meet the most important requirements for legal definitions. The paper then argues that a risk-based approach would be preferable: rather than using the term AI, policymakers should focus on the specific risks they want to reduce. It is shown that the requirements for legal definitions can be better met by defining the main sources of relevant risks: certain technical approaches (e.g. reinforcement learning), applications (e.g. facial recognition), and capabilities (e.g. the ability to physically interact with the environment). Finally, the paper discusses the extent to which this approach can also be applied to more advanced AI systems.

Keywords: AI regulation, scope of application, legal definition of AI, risk-based regulation

Suggested Citation

Schuett, Jonas, Defining the Scope of AI Regulations (August 22, 2021). Forthcoming in Law, Innovation and Technology; Legal Priorities Project Working Paper Series No. 9. Available at SSRN.

Jonas Schuett (Contact Author)

Centre for the Governance of AI

United Kingdom


Legal Priorities Project

1427 Cambridge St
Cambridge, MA 02139
United States


Goethe University Frankfurt


