Beyond Automation: Machine Learning-Based Systems and Human Behavior in the Personalization Economy
54 Pages · Posted: 14 Dec 2021
Date Written: December 6, 2021
Personalization has long been a feature of online services, shaping targeted advertising and online manipulation. Corporations now seek to exploit the enormous economic potential of personalization beyond the confines of the online space. In recent years, the proliferation of machine learning-based decision-making has extended personalization into all spheres of life.
Corporations rely on machine learning-based systems to decide if and under what conditions they contract with individuals. They determine who is invited for job interviews and who is eligible for loans. They shape how we are perceived and tailor the way in which we are treated. The implications of these systems are already immense, and they foreshadow a larger transformation. Over the course of the 21st century, ubiquitous, machine learning-based personalization will likely permeate the economy and become a fundamental condition of human existence. The shape of this transformation is still uncertain; before it concretizes, we have the opportunity to guide its direction by articulating concepts that allow us to describe and critically examine it.
Legal scholarship needs a conceptual foundation to address urgent questions about how personalization, driven by machine learning-based decision-making, affects liberty and other liberal democratic values. This article draws on surveillance theory to develop that foundation. It constructs a novel approach to examining the normative and constitutional implications of machine learning-based decision systems and ubiquitous personalization. The article builds on the concepts of panopticism and the surveillance assemblage to analyze how corporate machine learning-based decision-making affects the lives of individuals and transforms society. It is the first to develop an account of how ubiquitous personalization influences human agency and behavior. The article describes how machine learning-based decision systems amplify corporate power. It provides theoretical support for what Jack Balkin calls "normalization (or regimentation)": the idea that algorithmic evaluations and decisions will govern human behavior. The article shows that existing legal responses focusing on rights, explainability, and transparency fail to prevent the already fragile balance of power between individuals and corporations from tipping in corporations' favor. It argues that legal scholarship, to respond adequately to machine learning-based decision-making, must overcome its individualistic focus and engage in a debate on the legitimacy of corporate surveillance.
Finally, the article explains why and how we should measure legal responses to machine learning-based decision-making against standards of legitimacy. The notion of legitimacy provides a foundation for tackling one of the great challenges that machine learning-based systems pose to liberal democracies: reconciling corporate power with the values and freedoms central to these democracies. A legitimacy focus suggests that, to be adequate, legal responses must not only educate individuals about the functions of algorithmic systems and endow them with legal rights. They must also entail general principles, such as data minimization and limitation, that shape the conditions under which the personalization economy operates.
Keywords: AI, Big Data, Automated Decision-Making, Machine Learning, Personalization, Legitimacy, Surveillance Studies