Is it a Platform? Is it a Search Engine? It's Chat GPT! The European Liability Regime for Large Language Models
Journal of Free Speech Law, Vol. 3, Issue 2, 2023
34 Pages
Posted: 25 Aug 2023
Date Written: August 12, 2023
Abstract
ChatGPT and other AI large language models (LLMs) raise many of the regulatory and ethical challenges familiar to AI and social media scholars: They have been found to confidently invent information and present it as fact. They can be tricked into providing dangerous information even when they have been trained not to answer such questions. Their ability to mimic a personalized conversation can be very persuasive, which creates important disinformation and fraud risks. And they reproduce various societal biases, for example on issues related to gender and traditional work roles, because they are trained on internet data that embodies those biases. Thus, like other AI systems, LLMs risk sustaining or enhancing discrimination, perpetuating bias, and promoting the growth of corporate surveillance, all while being technically and legally opaque. Like social media, LLMs pose risks associated with the production and dissemination of information online, raising the same kinds of concerns about the quality and content of online conversations and public debate. Together, these risks threaten to distort political debate, undermine democracy, and even endanger public safety. Moreover, OpenAI reported an estimated 100 million active ChatGPT users in January 2023, so the potential impact of these risks is vast and systemic.
LLMs also hold great promise. They are expected to transform a variety of industries, freeing up professionals' time to focus on other substantive matters. They may also improve access to various services by facilitating the production of personalized content, for example for medical patients or students. Consequently, a critical policy question LLMs pose is how to regulate them so that these risks are mitigated while innovation is still encouraged and their benefits can be realized.
This Essay examines that question, focusing on the European Union's liability regime for LLMs with respect to speech and informational harms and risks. Even though the AI Act introduces some risk-mitigation obligations for these tools, those obligations will not become mandatory for at least two years, which is too long to wait. Because many of the risks these systems raise are risks to the information ecosystem, this Essay argues that they can and should be addressed, at the outset, under current content moderation law. The Essay thus proposes an interpretation of the newly enacted Digital Services Act under which it could apply to these tools when they are placed on the market in a way that strongly resembles other intermediaries covered by content moderation laws, such as search engines.
Note: This is an Accepted Manuscript of an article published in the Journal of Free Speech Law, originally available online at: https://www.journaloffreespeechlaw.org/boteroarcila.pdf
Keywords: ChatGPT, DSA, European Union, Freedom of Speech, Freedom of Expression, Artificial Intelligence, Intermediary Liability Law, Misinformation, Regulation, Information Environment