Comments on the Toronto Police Services Board Proposed Policy on AI Technologies

13 Pages Posted: 28 Feb 2024

Ana Brandusescu

McGill University

Alan Chan

University of Montreal

Fernando Diaz

Mila

Andrés Ferraro

McGill University

Alex Ketchum

McGill University

Fenwick McKelvey

Concordia University, Quebec

Jimin Rhim

McGill University

Shalaleh Rismani

McGill University

Renee Sieber

McGill University

Jonathan Sterne

McGill University

Yuan Y. Stevens

McGill University - Centre of Genomics and Policy; Data & Society Research Institute; Independent Research Consultant and Advisor

Date Written: January 31, 2024

Abstract

We welcome the Toronto Police Services Board’s (TPSB) proposed policy to introduce guidance for the Toronto Chief of Police on the use of artificial intelligence (AI). The proposed policy and consultation are long overdue. Before addressing our specific recommendations, which focus on evaluation; explainability and interpretability; and use, procurement, and development, we wish to raise four substantive points about the proposed policy.

1. Any implementation of AI technologies by law enforcement must begin from the assumption that not all effects of those technologies on policing or policed communities can be reliably anticipated, and must act accordingly in light of this uncertainty. Examples of algorithmic bias and other problematic behaviour surface in the press weekly. Computer scientists refine modelling and algorithmic techniques every year, both uncovering new problematic behaviour in existing algorithms and proposing new approaches. Because the technology is constantly changing, predictions of how its deployment affects different populations must also constantly change.

2. Toronto Police Service (TPS) policy should approach AI with caution and with greater rigour and transparency than both current practice and the proposed policy provide. AI has been in use by the TPS since at least 2016, yet the proposed policy includes no evidence about existing practices or potential harms. The proposed policy also includes a problematic spectrum of risk categories, such as an “Extreme Risk” category (clearly referencing Clearview AI) covering technologies that are illegal for police use in Canada. Proposals such as the Risk Evaluation Committee remain too underdeveloped in the proposed policy to allow effective feedback.

3. The TPS must prioritize consideration of potential infringements of the Canadian Charter of Rights and Freedoms and prevent them before they occur. We believe there is sufficient national research on these risks (including work by Robertson, Khoo and Song; Robertson and Khoo; Yuan Stevens; and Tamir Israel) to justify this recommendation. Bias and discrimination on the basis of race and other protected grounds are constitutive features of biometrics; these biases cannot be programmed out. Predictive policing, meanwhile, problematically maps statistical regularities in groups onto individuals. Biases in AI pose distinct risks for Toronto, one of Canada’s most multicultural cities, with known anti-Black bias in its policing. The use of AI can exacerbate existing biases: it is well documented that racialized, Indigenous, and Black peoples are more likely to suffer violence at the hands of police in Canada (and Toronto), and that systemic racism is pervasive in Canadian policing more broadly.

4. The Extreme Risk category is actionable immediately. The human rights risks of predictive policing tools such as biometric recognition technologies are so great that the proposed policy must prohibit the use of these “Extreme Risk” technologies, which can enable intrusive mass surveillance, particularly when used in live settings. The long timeline to phase out Extreme Risk technologies (2024) is unacceptable and allows the continued use of presumably illegal technologies by the Toronto Police Service. The Chief of Police should immediately review operations to identify any Extreme Risk AI technologies in use, and any such technologies should be retired at once.

Suggested Citation

Brandusescu, Ana and Chan, Alan and Diaz, Fernando and Ferraro, Andrés and Ketchum, Alex and McKelvey, Fenwick and Rhim, Jimin and Rismani, Shalaleh and Sieber, Renee and Sterne, Jonathan and Stevens, Yuan Y., Comments on the Toronto Police Services Board Proposed Policy on AI Technologies (January 31, 2024). Available at SSRN: https://ssrn.com/abstract=4712238 or http://dx.doi.org/10.2139/ssrn.4712238

Ana Brandusescu

McGill University ( email )

Montréal, Quebec
Canada

Alan Chan

University of Montreal

Canada

Andrés Ferraro

McGill University

1001 Sherbrooke St. W
Montreal, Quebec H3A 1G5
Canada

Alex Ketchum

McGill University

1001 Sherbrooke St. W
Montreal, Quebec H3A 1G5
Canada

Fenwick McKelvey

Concordia University, Quebec ( email )

1455 de Maisonneuve Blvd. W.
Montreal, Quebec H3G 1M8
Canada

Jimin Rhim

McGill University

1001 Sherbrooke St. W
Montreal, Quebec H3A 1G5
Canada

Shalaleh Rismani

McGill University

Montréal, Quebec
Canada

Renee Sieber

McGill University ( email )

1001 Sherbrooke St. W
Montreal, Quebec H3A 1G5
Canada

Jonathan Sterne

McGill University

1001 Sherbrooke St. W
Montreal, Quebec H3A 1G5
Canada

Yuan Y. Stevens (Contact Author)

McGill University - Centre of Genomics and Policy ( email )

740 Ave Dr Penfield
Montreal, Quebec H3A0G1
Canada

HOME PAGE: http://www.genomicsandpolicy.org

Data & Society Research Institute ( email )

36 West 20th Street
11th Floor
New York, NY 10011
United States

HOME PAGE: http://datasociety.net

Independent Research Consultant and Advisor ( email )

HOME PAGE: http://yuanstevens.org
