An Institutional View Of Algorithmic Impact Assessments
35 Harvard Journal of Law & Technology (forthcoming)
78 Pages · Posted: 24 Jun 2021 · Last revised: 25 Jun 2021
Date Written: June 15, 2021
Scholars and advocates have proposed algorithmic impact assessments (AIAs) as a regulatory strategy for addressing and correcting algorithmic harms. An AIA-based regulatory framework would require the creator of an algorithmic system to assess its potential socially harmful impacts before implementation and to create documentation that can later be used for accountability and future policy development. In practice, an impact assessment framework relies on expertise and information that only the creators of the project have access to. It is therefore inevitable that technology firms will have a degree of practical discretion in the assessment, and willing cooperation from firms is necessary for the regulation to work. But a regime that relies on good-faith partnership from the private sector also has strong potential to be undermined by the incentives and institutional logics of the private sector. This Article argues that for AIA regulation to be effective, it must anticipate the ways that such a regulation will be filtered through the private sector's institutional environment.
This Article combines insights from governance, organizational theory, and computer science to analyze how future AIA regulations will be implemented on the ground. Institutional logics, such as liability avoidance and the profit motive, will make the first goal—early consideration of social impacts—difficult to achieve in the short term. But AIAs can still be beneficial. The second goal—documentation to support future policy learning—does not require full compliance to succeed, and over time, there is reason to believe that AIAs can be part of a broader cultural shift toward accountability within the technology industry. That shift would lead to greater buy-in and less need to enforce documentation requirements.
Given these challenges and the reliance on participation, AIAs must work in synergy with the field rather than in tension with it. For this reason, the Article argues that it is also crucial for regulators to understand the technology industry itself, including the technology, the organizational culture, and emerging documentation standards. This Article demonstrates how emerging research in the field of algorithmic accountability can inform the shape of AIA regulation. By looking at the different stages of development and so-called "pause points," regulators can identify the points at which firms can be expected to produce documentation. Examining AI ethics research can reveal which social impacts the field considers important and where it may overlook issues that policymakers care about. Overall, understanding the industry can make AIA documentation requirements more legible to technology firms, easing the path for a future AIA mandate to succeed on the ground.
Keywords: impact assessments, algorithmic impact assessments, law and technology, algorithms, legislation