Communications of the ACM (Volume 55, Number 9), Sept. 2012, p. 33-35
3 Pages Posted: 21 Sep 2012
Date Written: September 1, 2012
Governments now use automated prediction for a wide range of tasks, from detecting insider trading to fighting and preventing violent crime and terrorist attacks. A large segment of the public considers automated predictions problematic and even frightening. This common visceral response is not always rational or accurate, yet it is backed by several relevant legal concepts. Automated prediction nonetheless deserves a closer look: because it limits the role of human discretion, it also limits the role of hidden biases, and might therefore promote important social objectives such as equality and fairness.
This short "Viewpoint" addresses the disparity between data mining's hidden benefits and its negative perception in public opinion. It concludes that automated prediction should play a more significant role in modern government, and that legal impediments blocking some of these practices should therefore be rethought and perhaps removed.
Keywords: Privacy, Data Mining, Prediction, Discrimination, Automation, Hidden Biases
Suggested Citation:
Zarsky, Tal, Automated Prediction: Perception, Law, and Policy (September 1, 2012). Communications of the ACM (Volume 55, Number 9), Sept. 2012, p. 33-35. Available at SSRN: https://ssrn.com/abstract=2149518