Developing a Legal Framework for Regulating Emotion AI

46 Pages. Posted: 23 Sep 2020. Last revised: 19 Jan 2021

Jennifer Bard

University of Cincinnati - College of Law

Date Written: August 21, 2020

Abstract

Of all the pressing issues facing the world in the summer of 2020, establishing a code of ethics for the use of Artificial Intelligence (AI) might not appear to belong at the same level as combating Covid-19 or addressing racism. But a team of researchers at the Centre for the Study of Existential Risk at the University of Cambridge (the Cambridge Team) traces the harm caused by these events directly to the increased use of AI-assisted decision making.

In July 2020 the Cambridge Team put out an urgent call to “technologists, ethicists, policymakers and healthcare professionals,” asking them “to consider how ethics can be implemented at speed in the ongoing response to the COVID-19 crisis.” Although the Cambridge Team did not call for help from law professors, any effort to develop an enforceable AI code of ethics will face significant legal barriers unless we can better articulate the harms attributable to the newest iteration of AI: Emotion AI. Built on the theory that human emotions can be detected through analysis of facial expressions, Emotion AI has become a multi-billion-dollar industry and has already been adopted by many of the world's best-known companies for market research, digital interviewing, and screening prospective employees. Not only do Emotion AI's developers claim that it can read emotions; in a chilling throwback to the murderous computer of 2001: A Space Odyssey or the romantic partner of Her, they claim that it can convincingly generate emotional responses in its interactions with humans.

I do not purport to provide a code of Emotion AI ethics, or, more accurately, to choose among the many proposed codes that already exist. Instead, this paper looks directly at the legal barriers to enforcing such a code in the absence of a shared understanding of the harm caused by either detecting or imitating feelings, and it proposes a framework for addressing those harms through existing legal remedies that protect against invasions of privacy by both public and private entities. The longstanding concern that AI-assisted decision making recapitulates the biases of society in ways that are difficult to detect and mitigate is still very much with us. But in this article I make the case that the potential harm of Emotion AI is even more significant because it is so much more difficult to define, and I create a framework for analyzing that harm within the scope of existing legal remedies.

Suggested Citation

Bard, Jennifer S., Developing a Legal Framework for Regulating Emotion AI (August 21, 2020). 27 B.U. J. Sci. & Tech. L. (2020), University of Florida Levin College of Law Research Paper No. 21-1, Available at SSRN: https://ssrn.com/abstract=3680909 or http://dx.doi.org/10.2139/ssrn.3680909

Jennifer S. Bard (Contact Author)

University of Cincinnati - College of Law

P.O. Box 210040
Cincinnati, OH 45221-0040
United States
