Privacy As Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning
Forthcoming in Theoretical Inquiries in Law 2019, 19(1)
33 Pages · Posted: 8 Dec 2017 · Last revised: 1 Oct 2018
Date Written: December 3, 2017
This paper takes the perspective of law and philosophy, integrating insights from computer science. First, I will argue that in the era of big data analytics we need an understanding of privacy that is capable of protecting what is uncountable, incalculable or incomputable about individual persons. To ground this new dimension of the right to privacy, I expand previous work on the relational and ecological nature of privacy and on the productive indeterminacy of human identity. Second, I will explain that this does not imply a rejection of machine learning. Based on an in-depth study of the assumptions, operations and implications of the practice of machine learning, I highlight its alignment with purpose limitation as core to its methodological integrity. Rather than rejecting machine learning, I advocate a practice of ‘agonistic machine learning’ as central to the scientifically viable integration of data-driven applications into our environments, while simultaneously bringing them under the Rule of Law. This should also provide the best means of achieving effective protection against the overdetermination of individuals by machine inferences.
Keywords: Privacy, Incomputability, Machine Learning, Capture, Interpretability, Agonism, Democratic Theory, Mouffe, Technology Assessment, Arie Rip, GDPR, Adversarial Design, Methodological Integrity, Purpose Binding, Automated Decisions