Governing Algorithmic Systems with Impact Assessments: Six Observations

AAAI / ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2021

12 Pages Posted: 18 May 2021

Elizabeth Anne Watkins

Princeton University Center for Information Technology Policy; Data & Society Research Institute

Emanuel Moss

Intel Labs

Jacob Metcalf

Data & Society Research Institute

Ranjit Singh

Data & Society Research Institute

Madeleine Clare Elish

Google Inc.; University of Oxford - Oxford Internet Institute

Date Written: May 14, 2021

Abstract

Algorithmic decision-making and decision-support systems (ADS) are gaining influence over how society distributes resources, administers justice, and provides access to opportunities. Yet collectively we do not adequately study how these systems affect people or document the actual or potential harms resulting from their integration with important social functions. This is a significant challenge for computational justice efforts to measure and govern AI systems. Impact assessments are often used as instruments to create accountability relationships and to grant some measure of agency and voice to communities affected by projects with environmental, financial, and human rights ramifications. Applying these tools, in the form of Algorithmic Impact Assessments (AIAs), is a plausible way to establish accountability relationships for ADSs. At the same time, what an AIA would entail remains under-specified; AIAs raise as many questions as they answer. Choices about the methods, scope, and purpose of AIAs structure the conditions of possibility for AI governance. In this paper, we draw on our research into the history of impact assessments across diverse domains, viewed through a sociotechnical lens, to present six observations on how they co-constitute accountability. Decisions about what types of effects count as impacts, when impacts are assessed, whose interests are considered, who is invited to participate, who conducts the assessment, how assessments are made publicly available, and what the outputs of an assessment might be all shape the forms of accountability that AIAs engender. Because AIAs are still an incipient governance strategy, approaching them as social constructions that do not require a single or universal approach offers a chance to produce interventions that emerge from careful deliberation.

Keywords: algorithmic impact assessment, impact, harm, accountability, governance

Suggested Citation

Watkins, Elizabeth Anne and Moss, Emanuel and Metcalf, Jacob and Singh, Ranjit and Elish, Madeleine Clare, Governing Algorithmic Systems with Impact Assessments: Six Observations (May 14, 2021). AAAI / ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2021, Available at SSRN: https://ssrn.com/abstract=3846300

Elizabeth Watkins (Contact Author)

Data & Society Research Institute ( email )

36 West 20th Street
11th Floor
New York, NY 10011
United States

Princeton University Center for Information Technology Policy ( email )

C231A E-Quad
Olden Street
Princeton, NJ 08540
United States

HOME PAGE: https://citp.princeton.edu/citp-people/watkins/

Emanuel Moss

Intel Labs ( email )

2200 Mission College Blvd.
Santa Clara, CA 95054-1549
United States

Jacob Metcalf

Data & Society Research Institute ( email )

36 West 20th Street
11th Floor
New York, NY 10011
United States

Ranjit Singh

Data & Society Research Institute ( email )

36 West 20th Street
11th Floor
New York, NY 10011
United States

Madeleine Clare Elish

Google Inc. ( email )

1600 Amphitheatre Parkway
Second Floor
Mountain View, CA 94043
United States

University of Oxford - Oxford Internet Institute ( email )

1 St. Giles
University of Oxford
Oxford, Oxfordshire OX1 3JS
United Kingdom

Paper statistics

Downloads: 171
Abstract Views: 898
Rank: 312,164