Predictive Policing and Bias in a Nutshell: Technical and Practical Aspects of Personal Data Processing for Law Enforcement Purposes

Digital Criminal Justice: A Studybook Selected Topics for Learners and Researchers. Krisztina Karsai, Adem Sözüer, Liane Wörner (eds.). Istanbul, 2022

16 Pages, Posted: 6 Oct 2022

Date Written: June 5, 2022

Abstract

Law enforcement bodies have historically employed various methods to anticipate certain crimes and to identify their possible perpetrators. With Big Data and advances in Artificial Intelligence techniques, these methods are increasingly supported by predictive policing algorithms. Such algorithms are trained on large sets of human-generated personal data, accompanied by features of the sources from which the data were collected. Because bias is inherent in human behaviors and statements, algorithms built on such sources inherit biased properties. These flaws introduce statistical errors into the algorithms, which in turn lead predictive policing algorithms to generate biased outputs. Any form of bias runs contrary to established human rights legislation, and the topic falls squarely under the European Union’s General Data Protection Regulation (GDPR), the Law Enforcement Directive, and the proposed Artificial Intelligence Act (AI Act). This article analyzes the sources of bias in predictive policing algorithms and discusses the applicability of the aforementioned legislation in a comprehensive manner. Even though national and other legislation regulates how law enforcement bodies process and hold personal data on individuals, certain GDPR principles, such as transparency and explainability, still apply as general rules. However, the explainability of outputs generated by predictive policing algorithms may be not only a traditional black-box issue but also a consequence of administrative and practical processes. Whether the proposed AI Act would offer a new solution to the problem remains open to discussion, while classifying predictive policing algorithms under the unacceptable-risk category would be the most feasible solution until these problems are appropriately addressed.
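
The abstract's central technical claim, that skewed historical records propagate bias into an algorithm's outputs, can be illustrated with a minimal sketch. The example below is not from the paper: the district names, crime rates, patrol allocations, and the naive risk-scoring rule are all hypothetical assumptions, chosen only to show how patrol-driven sampling bias in recorded incidents can feed back into a predictive risk score.

```python
"""Illustrative sketch (not from the paper): how patrol-driven sampling bias
in historical crime records can feed back into a naive predictive-policing
risk score. All names, rates, and rules below are hypothetical."""

import random

random.seed(42)

TRUE_CRIME_RATE = {"District A": 0.05, "District B": 0.05}  # identical underlying rates
PATROL_HOURS = {"District A": 2000, "District B": 1000}      # District A is over-patrolled

def simulate_recorded_incidents(patrol_hours):
    """Crimes are only *recorded* when a patrol is present, so recorded counts
    track patrol intensity, not just the underlying crime rate."""
    records = {}
    for district, hours in patrol_hours.items():
        rate = TRUE_CRIME_RATE[district]
        records[district] = sum(1 for _ in range(hours) if random.random() < rate)
    return records

def naive_risk_score(records):
    """A naive model: risk = share of all recorded incidents in that district.
    It knows nothing about how the data were collected."""
    total = sum(records.values())
    return {d: n / total for d, n in records.items()}

def reallocate_patrols(scores, total_hours):
    """Next period's patrols follow the risk scores, amplifying the skew."""
    return {d: round(s * total_hours) for d, s in scores.items()}

records = simulate_recorded_incidents(PATROL_HOURS)
scores = naive_risk_score(records)
next_patrols = reallocate_patrols(scores, sum(PATROL_HOURS.values()))

print("Recorded incidents:", records)
print("Risk scores:       ", {d: round(s, 2) for d, s in scores.items()})
print("Next patrol hours: ", next_patrols)
# Although both districts have the same true crime rate, District A ends up
# with roughly two thirds of recorded incidents, a higher risk score, and even
# more patrol hours in the next period: a self-reinforcing bias loop.
```

Under these assumptions, the statistical error originates in the data-collection process rather than in the learning step itself, which is why the resulting bias is hard to detect or explain from the model's outputs alone.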

Keywords: predictive policing, algorithm, artificial intelligence, GDPR, data protection, bias

Suggested Citation

Gültekin-Várkonyi, Dr. Gizem, Predictive Policing and Bias in a Nutshell: Technical and Practical Aspects of Personal Data Processing for Law Enforcement Purposes (June 5, 2022). Digital Criminal Justice: A Studybook Selected Topics for Learners and Researchers. Krisztina Karsai, Adem Sözüer, Liane Wörner (eds.). Istanbul, 2022, Available at SSRN: https://ssrn.com/abstract=4238774 or http://dx.doi.org/10.2139/ssrn.4238774

Dr. Gizem Gültekin-Várkonyi (Contact Author)

University of Szeged - Faculty of Law

Hungary
