Smile for the Camera: Privacy and Policy Implications of Emotion AI

12 Pages Posted: 31 Mar 2017 Last revised: 16 Aug 2017

Elaine Sedenberg

University of California, Berkeley

John Chuang

University of California, Berkeley - School of Information

Date Written: March 30, 2017


We are biologically programmed to publicly display emotions as social cues and involuntary physiological reflexes: grimaces of disgust alert others to poisonous food, pursed lips and furrowed brows warn of mounting aggression, and spontaneous smiles relay our joy and friendship. Though shaped by evolutionary pressure to be public, these signals were once seen only within a few feet of our compatriots — purposefully fleeting, fuzzy in definition, and rooted in the immediate and proximate social context.

The application of artificial intelligence (AI) to visual images for emotional analysis obliterates the natural subjectivity and contextual dependence of our facial displays. This technology may be easily deployed in numerous contexts by diverse actors for purposes ranging from the nefarious to the socially assistive — such as proposed autism therapies. Emotion AI acts as an algorithmic lens on our digital artifacts and real-time interactions, creating the illusion of a new, objective class of data: our emotional and mental states. Building upon a rich network of existing public photographs — as well as fresh feeds from surveillance footage or smartphone cameras — these emotion algorithms require no additional infrastructure or improvements in image quality.

The privacy and security implications of emotional surveillance are unprecedented — especially when taken alongside other signals, including physiological biosignals (e.g., heart rate or body temperature). Emotion AI also presents new methods to manipulate individuals, from targeting political propaganda to phishing for passwords based on micro-reactions. The lack of transparency or notice around these practices makes public inquiry unlikely, if not impossible.

To examine the potential policy and legal remedies for emotion AI as an emerging technology, we first establish a framework of actors, collection motivations, time scales, and space considerations that differentiates emotion AI from other algorithmic lenses. Each of these elements influences the available policy remedies and should shape continuing discussions on the antecedent conditions that make emotion AI acceptable or not in particular contexts.

Emotion analysis can build on existing digital infrastructure with ease, yielding a variety of benefits and risks. Based on our framework of unique elements, we examine potential policy remedies to prevent or remediate harm. Specifically, our paper looks toward the regulatory role of the Federal Trade Commission in the US, gaps in the EU's General Data Protection Regulation (GDPR) that allow for emotion data collection, and the precedent set by polygraph technologies in evidentiary and use restrictions set by law. We also examine how social norms and adaptations could evolve to modulate broader use. Given the challenges in controlling the flow of these data, we call for further research and attention as emotion AI technology remains poised for adoption.

Keywords: privacy; artificial intelligence; technology policy

Suggested Citation

Sedenberg, Elaine and Chuang, John, Smile for the Camera: Privacy and Policy Implications of Emotion AI (March 30, 2017). Available at SSRN:

Elaine Sedenberg (Contact Author)

University of California, Berkeley

102 South Hall #4600
Berkeley, CA 94720
United States


John Chuang

University of California, Berkeley - School of Information

102 South Hall
Berkeley, CA 94720-4600
United States
