Comments on the Toronto Police Services Board Proposed Policy on AI Technologies
Date Written: January 31, 2024
Abstract
We welcome the Toronto Police Services Board’s (TPSB) proposed policy to introduce guidance for the Toronto Chief of Police on the use of artificial intelligence (AI). The proposed policy and consultation are long overdue. Before turning to our specific recommendations on evaluation; explainability and interpretability; and use, procurement, and development, we wish to raise four substantive points about the proposed policy.
1. Any implementation of AI technologies by law enforcement must begin from the assumption that law enforcement cannot reliably anticipate all of the effects of those technologies on policing or on policed communities, and must act accordingly in light of that uncertainty. Examples of algorithmic bias and other problematic behaviour surface in the press weekly. Computer scientists continue to refine modelling and algorithmic techniques every year, both surfacing new problematic behaviour in existing algorithms and proposing new approaches. Because the technology is constantly changing, predictions of how its deployment affects different populations must be constantly revisited.
2. Toronto Police Service (TPS) policy should approach AI with caution, and with greater rigour and transparency than the TPS has shown to date and than the proposed policy currently requires. AI has been in use by the TPS since at least 2016, yet the proposed policy presents no evidence about existing practices or potential harms. The proposed policy’s spectrum of risk categories is also problematic: its “Extreme Risk” category (clearly referencing Clearview AI) covers technologies that are illegal for police use in Canada. Proposals such as the Risk Evaluation Committee remain too underdeveloped in the proposed policy to allow effective feedback.
3. The TPS must prioritize the consideration of potential infringements of the Canadian Charter of Rights and Freedoms and prevent them before they occur. We believe there is sufficient national research on these risks (including work by Robertson, Khoo and Song; Robertson and Khoo; Yuan Stevens; and Tamir Israel) to justify this recommendation. Discrimination on the basis of race and other protected grounds is a constitutive feature of biometrics; these biases cannot be programmed out. Predictive policing, meanwhile, problematically maps statistical regularities in groups onto individuals. Biases in AI pose distinct risks for Toronto, one of Canada’s most multicultural cities, with known anti-Black bias in its policing. The use of AI can exacerbate existing biases: it is well known that racialized, Indigenous, and Black peoples are more likely to suffer violence at the hands of police in Canada (and in Toronto), and that systemic racism is pervasive in Canadian policing more broadly.
4. The Extreme Risk category is actionable immediately. The human rights risks posed by predictive policing tools and biometric recognition technologies are so great that the proposed policy must prohibit the use of these “Extreme Risk” technologies, which can enable intrusive mass surveillance, particularly when deployed in live settings. The long timeline for phasing out Extreme Risk technologies (2024) is unacceptable and allows the continued use of presumably illegal technologies by the Toronto Police Service. The Chief of Police should immediately undertake a review to identify any Extreme Risk AI technologies in operation, and any such technologies should be retired immediately.