From Individual Control to Social Protection: New Paradigms for Privacy Law in the Age of Predictive Analytics

66 Pages · Posted: 6 Sep 2019 · Last revised: 1 Oct 2019

Dennis D. Hirsch

Ohio State University (OSU) - Michael E. Moritz College of Law; Capital University Law School

Date Written: February 1, 2019


What comes after the control paradigm? For decades, privacy law has sought to provide individuals with notice and choice and so give them control over their personal data. But what happens when this regulatory paradigm breaks down?

Predictive analytics forces us to confront this challenge. Individuals cannot understand how predictive analytics uses their surface data to infer latent, far more sensitive data about them. This prevents individuals from making meaningful choices about whether to share their surface data in the first place. It also creates threats (such as harmful bias, manipulation, and procedural unfairness) that go well beyond the privacy interests that the control paradigm seeks to safeguard. To protect people in the algorithmic economy, privacy law must shift from a liberalist legal paradigm that focuses on individual control to one in which public authorities set substantive standards that defend people against algorithmic threats.

Leading scholars such as Jack Balkin (information fiduciaries), Helen Nissenbaum (contextual integrity), Danielle Citron (technological due process), Craig Mundie (use-based regulation) and others recognize the need for such a shift and propose ways to achieve it. This article ties these proposals together, views them as attempts to define a new regulatory paradigm for the age of predictive analytics, and evaluates whether each achieves this aim.

It then argues that the solution may be hiding in plain sight in the form of the FTC’s Section 5 unfairness authority. It explores whether the FTC could use its unfairness authority to draw substantive lines between data analytics practices that are socially appropriate and fair, and those that are inappropriate and unfair, and examines how the Commission would make such determinations. It argues that this existing authority, which requires no new legislation, provides a comprehensive and politically legitimate way to create much needed societal boundaries around corporate use of predictive analytics. It concludes that the Commission could use its unfairness authority to protect people from the threats that the algorithmic economy creates.

Keywords: Predictive Analytics, Advanced Analytics, Data Analytics, Big Data, Algorithm, Federal Trade Commission, Unfairness, Privacy, Bias, Algorithmic Bias, Discrimination, Manipulation, Technological Due Process

JEL Classification: K2, K29, K39, M15, Z18

Suggested Citation

Hirsch, Dennis, From Individual Control to Social Protection: New Paradigms for Privacy Law in the Age of Predictive Analytics (February 1, 2019). Maryland Law Review, Forthcoming; Ohio State Public Law Working Paper No. 506. Available at SSRN.

Dennis Hirsch (Contact Author)

Ohio State University (OSU) - Michael E. Moritz College of Law (email)

55 West 12th Avenue
Columbus, OH 43210
United States

Capital University Law School (email)

303 East Broad St.
Columbus, OH 43215
United States
