65 Pages. Posted: 15 Feb 2023; Last revised: 13 Mar 2023
Date Written: February 11, 2023
ChatGPT is a prominent example of how Artificial Intelligence (AI) has stormed into our lives. Within a matter of weeks, this new AI—which produces coherent and human-like textual answers to questions—has managed to become an object of both admiration and anxiety. Can we trust generative AI systems, such as ChatGPT, without regulatory oversight?
Designing an effective legal framework for AI requires answering three main questions: (i) is there a market failure that requires legal intervention? (ii) should AI be governed through public regulation, tort liability, or a mixture of both? and (iii) should liability be based on strict liability or a fault-based regime such as negligence? The law and economics literature offers clear considerations for these choices, focusing on the incentives of injurers and victims to take precautions, engage in efficient activity levels, and acquire information.
This Article is the first to comprehensively apply these considerations to ChatGPT as a leading test case. As the United States is lagging behind in its response to the AI revolution, I focus on the recent proposals in the European Union to restrain AI systems, which apply a risk-based approach and combine regulation and liability. The analysis reveals that this approach does not map neatly onto the relevant distinctions in law and economics, such as market failures, unilateral versus bilateral care, and known versus unknown risks. Hence, the existing proposals may lead to various incentive distortions and inefficiencies. The Article, therefore, calls upon regulators to place a stronger emphasis on law and economics concepts in their design of AI policy.
Keywords: ChatGPT, Artificial Intelligence, Liability, Regulation, Strict Liability, Negligence
JEL Classification: K13, L51, O32