Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement

Posted: 3 Feb 2021

Date Written: January 2021


Abstract

There are widespread concerns about the use of artificial intelligence in law enforcement. Predictive policing and risk assessment are salient examples. Worries include the accuracy of forecasts that guide both activities, the prospect of bias, and an apparent lack of operational transparency. Nearly breathless media coverage of artificial intelligence helps shape the narrative. In this review, we address these issues by first unpacking depictions of artificial intelligence. Its use in predictive policing to forecast crimes in time and space is largely an exercise in spatial statistics that in principle can make policing more effective and more surgical. Its use in criminal justice risk assessment to forecast who will commit crimes is largely an exercise in adaptive, nonparametric regression. It can in principle allow law enforcement agencies to better provide for public safety with the least restrictive means necessary, which can mean far less use of incarceration. None of this is mysterious. Nevertheless, concerns about accuracy, fairness, and transparency are real, and there are tradeoffs between them for which there can be no technical fix. You can't have it all. Solutions will be found through political and legislative processes achieving an acceptable balance between competing priorities.
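The abstract characterizes risk assessment as adaptive, nonparametric regression: estimating an individual's risk from the outcomes of similar past cases rather than from a fixed parametric formula. As a purely illustrative sketch of that idea (not the paper's actual method; the features, data, and function names below are invented for illustration), a k-nearest-neighbors estimator scores a new case by the fraction of its most similar past cases that reoffended:

```python
# Illustrative sketch only: risk assessment as nonparametric estimation.
# A k-nearest-neighbors risk score estimates the probability that a new
# case reoffends as the share of the k most similar past cases that did.
# Feature names and data are synthetic, not any deployed instrument.
import math

def knn_risk_score(history, features, k=3):
    """history: list of (feature_vector, reoffended_flag) past cases.
    Returns the fraction of the k nearest past cases that reoffended."""
    nearest = sorted(history, key=lambda case: math.dist(case[0], features))[:k]
    return sum(flag for _, flag in nearest) / k

# Synthetic past cases: (age, prior_arrests) -> reoffended (1) or not (0)
past = [
    ((19, 4), 1), ((22, 3), 1), ((45, 0), 0),
    ((38, 1), 0), ((25, 5), 1), ((50, 2), 0),
]

print(knn_risk_score(past, (21, 4), k=3))  # young, many priors: score 1.0
print(knn_risk_score(past, (47, 1), k=3))  # older, few priors: score 0.0
```

The estimator is "nonparametric" in that it imposes no functional form; it adapts to whatever structure the historical data contain, which is also why its accuracy and fairness depend entirely on those data.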

Suggested Citation

Berk, Richard, Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement (January 2021). Annual Review of Criminology, Vol. 4, pp. 209-237, 2021. Available at SSRN.

Richard Berk (Contact Author)

University of Pennsylvania ( email )

Philadelphia, PA 19104
United States
