Human Rights and Algorithmic Impact Assessment for Predictive Policing
Constitutional Challenges in the Algorithmic Society, Cambridge University Press, 2021, Forthcoming
16 pages. Posted: 22 Jul 2021
Date Written: July 20, 2021
Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. For instance, New York, Chicago, and Los Angeles use predictive policing systems built by private actors, such as PredPol, Palantir, and HunchLab, to assess crime risk and forecast its occurrence in the hope of mitigating it. Most often, such systems predict the places where crimes are most likely to happen in a given time window (place-based), drawing on input data such as the location and timing of previously reported crimes. Other systems analyze who will be involved in a crime as either victim or perpetrator (person-based). Predictions can focus on variables such as places, people, groups, or incidents. The goal is also to deploy officers more effectively in a time of declining budgets and staffing. Such tools are mainly used in the US, but European police forces have expressed an interest in using them to protect their largest cities, and predictive policing systems and pilot projects have already been deployed there, such as PredPol, used by the Kent Police in the UK.
However, these predictive systems challenge fundamental rights and the guarantees of criminal procedure (Part 2). I address these issues by considering the enactment of ethical norms to reinforce constitutional rights (Part 3), as well as the use of a practical tool, namely the Algorithmic Impact Assessment, to mitigate the risks of such systems (Part 4).
Keywords: policing, algorithmic decision-making, biases, discrimination