3rd Review of the Treasury Board Directive on Automated Decision-Making
24 Pages Posted: 29 Apr 2022
Date Written: April 22, 2022
The 3rd review of the Treasury Board Directive on Automated Decision-Making takes stock of the current state of the policy instrument and identifies several risks and challenges to the federal government’s commitment to responsible artificial intelligence (AI). It discusses critical gaps that limit the Directive’s relevance and effectiveness in supporting transparency, accountability, and fairness in automated decision-making.
A key problem this review seeks to address is the Directive’s scope, which excludes internal services and therefore automated decisions impacting federal employees. It also points to the need for a more holistic approach to governing data associated with AI development and use. The review further notes that while the Directive seeks to mitigate data bias, it does not account for bias arising from a system’s underlying model.
The review finds the Directive’s approach to explanation inadequate, as the instrument does not clarify what constitutes a “meaningful explanation”. It also highlights the need to strengthen peer review transparency and effectiveness, as the current peer review requirement neither mandates the release of information about completed reviews nor specifies when they should be conducted. Similarly, the timing of the mandated release of the Algorithmic Impact Assessment (AIA) is found to be unclear in the Directive and mostly reliant on ad-hoc guidance.
In addition to identifying gaps in the instrument, the review raises issues with the terminology used in existing provisions. In particular, it stresses the need for encompassing language to describe the subjects of automated decision-making and the importance of fostering policy coherence by aligning terms across Treasury Board policy instruments.
The rapid pace of digital transformation in the federal government has made questions around the purpose and scope of automation increasingly relevant. The review finds that the Directive does not properly consider justifications for automation or evaluate whether a system will be used within boundaries determined based on user needs and program objectives.
The review also considers broader issues related to the governance of the Directive itself. It provides an analysis showing how the current periodic review mechanism poses policy and operational challenges to the federal government.
The review concludes with policy recommendations and provisional amendments addressing these issues.
Keywords: AI policy, policy review, AI governance, responsible AI, GCDigital