The Un-Modeled World: Law and the Limits of Machine Learning
MIT Computational Law Report, Vol. 4 (Forthcoming 2022)
24 Pages. Posted: 12 Sep 2022. Last revised: 22 Sep 2022.
Date Written: August 13, 2022
There is today a pervasive concern that humans will not be able to keep up with accelerating technological progress in law and will become objects of sheer manipulation. Those who believe that human objectification is on the horizon offer solutions that require humans to take control, mostly through self-awareness and the development of will. Among others, these strategies appear in Heidegger, Marcuse, and Habermas, as discussed here. But these solutions are not the only way. Technology itself offers a solution on its own terms. Machines can learn only if they can observe patterns, and those patterns must occur in sufficiently stable environments. Without detectable regularities and contextual invariance, machines remain prone to error. Yet humans innovate and things change. This means that innovation operates as a self-corrective—a built-in feature that limits technology's ability to objectify human life and law without error. Fears of complete technological ascendance in law and elsewhere are therefore exaggerated, though interesting intermediate states are likely to obtain. Progress will proceed apace in closed legal domains, but models will require continual adaptation and updating in legal domains where human innovation and openness prevail.
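The abstract's central technical claim—that a learned model is only as good as the stability of the environment it was trained in—can be sketched in a few lines. The example below is hypothetical and not drawn from the article: a toy classifier fit to a fixed regularity performs perfectly while that regularity holds, then errs once the underlying rule shifts, standing in for the way human innovation outruns a trained model.

```python
# A minimal sketch (hypothetical data, not from the article) of why
# learned models degrade when the environment changes: a classifier
# fit to a stable pattern fails once "innovation" shifts the rule.

def fit_threshold(samples):
    """Learn a decision threshold as the midpoint between class means."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, samples):
    """Fraction of samples the threshold rule classifies correctly."""
    correct = sum((x > threshold) == (y == 1) for x, y in samples)
    return correct / len(samples)

# Stable regime: outcomes follow a fixed regularity (x > 5 -> label 1).
train = [(x, int(x > 5)) for x in range(11)]
t = fit_threshold(train)
print(f"accuracy in stable regime: {accuracy(t, train):.2f}")  # 1.00

# After innovation, the regularity itself moves (x > 8 -> label 1);
# the unchanged model now errs on the shifted cases until retrained.
shifted = [(x, int(x > 8)) for x in range(11)]
print(f"accuracy after the rule shifts: {accuracy(t, shifted):.2f}")  # 0.73
```

This mirrors the abstract's point: in closed domains the first regime persists and the model stays reliable; in open domains the second regime recurs, forcing continual adaptation and updating.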
Keywords: computational law, machine learning, Heidegger, Marcuse, Habermas, Valiant