An Institutional View Of Algorithmic Impact Assessments

35 Harvard Journal of Law & Technology 117 (2021)

UCLA School of Law, Public Law Research Paper No. 21-25

75 Pages · Posted: 24 Jun 2021 · Last revised: 20 Jan 2022

Date Written: June 15, 2021

Abstract

Scholars and advocates have proposed algorithmic impact assessments (“AIAs”) as a regulatory strategy for addressing and correcting algorithmic harms. An AIA-based regulatory framework would require the creator of an algorithmic system to assess its potential socially harmful impacts before implementation and to create documentation that can be used later for accountability and future policy development. In practice, an impact assessment framework relies on expertise and information to which only the creators of the project have access. It is therefore inevitable that technology firms will retain considerable practical discretion in the assessment, and willing cooperation from firms is necessary to make the regulation work. But a regime that relies on good-faith partnership from the private sector also has strong potential to be undermined by the incentives and institutional logics of the private sector. This Article argues that for AIA regulation to be effective, it must anticipate the ways that such regulation will be filtered through the private sector institutional environment.

This Article combines insights from governance, organizational theory, and computer science to explore how future AIA regulations may be implemented on the ground. An AIA regulation has two main goals: (1) to require firms to consider social impacts early and work to mitigate them before development, and (2) to create documentation of decisions and testing that can support future policy learning. The Article argues that institutional logics, such as liability avoidance and the profit motive, will make the first goal difficult to achieve fully in the short term, because the practical discretion that firms have leaves them room to undermine the AIA requirements. But AIAs can still be beneficial because the second goal does not require full compliance to succeed. Over time, there is also reason to believe that AIAs can be part of a broader cultural shift toward accountability within the technology industry, leading to greater buy-in and less need for enforcement of documentation requirements.

Given the degree to which an AIA regulation will rely on good-faith participation by regulated firms, AIAs must work in synergy with the field’s existing practices rather than in tension with them. For this reason, the Article argues that it is also crucial that regulators understand the technology industry itself, including the technology, the organizational culture, and emerging documentation standards. This Article demonstrates how emerging research within the field of algorithmic accountability can also inform the shape of AIA regulation. By examining the different stages of development and so-called “pause points,” regulators can identify the points at which firms can export information. Examining AI ethics research can reveal which social impacts the field considers important and where it may overlook issues that policymakers care about. Overall, understanding the industry can make the AIA documentation requirements themselves more legible to technology firms, easing the path for a future AIA mandate to be successful on the ground.

Keywords: impact assessments, algorithmic impact assessments, law and technology, algorithms, legislation

Suggested Citation

Selbst, Andrew D., An Institutional View Of Algorithmic Impact Assessments (June 15, 2021). 35 Harvard Journal of Law & Technology 117 (2021), UCLA School of Law, Public Law Research Paper No. 21-25, Available at SSRN: https://ssrn.com/abstract=3867634
