Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies

48 Pages · Posted: 31 May 2015 · Last revised: 22 Jan 2024

Date Written: May 30, 2015


Artificial intelligence technology (or AI) has developed rapidly during the past decade, and the effects of the AI revolution are already being keenly felt in many sectors of the economy. A growing chorus of commentators, scientists, and entrepreneurs has expressed alarm regarding the increasing role that autonomous machines are playing in society, with some suggesting that government regulation may be necessary to reduce the public risks that AI will pose. Unfortunately, the unique features of AI and the manner in which AI can be developed present both practical and conceptual challenges for the legal system. These challenges must be confronted if the legal system is to positively impact the development of AI and ensure that aggrieved parties receive compensation when AI systems cause harm. This article will explore the public risks associated with AI and the competencies of government institutions in managing those risks. It concludes with a proposal for an indirect form of AI regulation based on differential tort liability.

Keywords: Law, Artificial Intelligence, Emerging Technologies, Regulation

JEL Classification: K29, K49, O32, O38

Suggested Citation

Scherer, Matthew U., Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies (May 30, 2015). Harvard Journal of Law & Technology, Vol. 29, No. 2, Spring 2016. Available at SSRN.

Matthew U. Scherer (Contact Author)

Center for Democracy & Technology

1401 K St NW
Ste 200
Washington, DC 20005
United States
