The Case for a Broader Approach to AI Assurance: Addressing 'Hidden' Harms in the Development of Artificial Intelligence

22 Pages Posted: 8 Jan 2024


Chris Thomas

The Alan Turing Institute

Huw Roberts

University of Oxford - Oxford Internet Institute

Jakob Mökander

University of Oxford - Oxford Internet Institute; Princeton University - Center for Information Technology Policy

Andreas Tsamados

University of Oxford - Oxford Internet Institute

Mariarosaria Taddeo

University of Oxford - Oxford Internet Institute

Luciano Floridi

Yale University - Digital Ethics Center; University of Bologna - Department of Legal Studies

Date Written: December 11, 2023

Abstract

Artificial intelligence (AI) assurance is an umbrella term describing many approaches – such as impact assessment, audit, and certification procedures – used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance approaches largely focus on two overlapping categories of harms: deployment harms that emerge at, or after, the point of use, and individual harms that directly impact a person as an individual. Current approaches generally overlook upstream collective and societal harms associated with the development of systems, such as resource extraction and processing, exploitative labour practices, and energy-intensive model training. Thus, the scope of current AI assurance practice is insufficient for ensuring that AI is ethical in a holistic sense, i.e., in ways that are legally permissible, socially acceptable, economically viable, and environmentally sustainable. This article addresses this shortcoming by arguing for a broader approach to AI assurance that is sensitive to the full scope of AI development and deployment harms. To do so, the article maps harms related to AI and highlights three examples of harmful practices that occur upstream in the AI supply chain: environmental harm, labour exploitation, and data exploitation. It then reviews assurance mechanisms used in adjacent industries to mitigate similar harms, evaluating their strengths, their weaknesses, and how effectively they are being applied to AI. Finally, it provides recommendations as to how a broader approach to AI assurance can be implemented to mitigate harms more effectively across the whole AI supply chain.

Keywords: Artificial Intelligence, Audit, Assurance, Certification, Compliance, Impact Assessment, ESG, Sustainability, Governance

Suggested Citation

Thomas, Chris and Roberts, Huw and Mökander, Jakob and Tsamados, Andreas and Taddeo, Mariarosaria and Floridi, Luciano, The Case for a Broader Approach to AI Assurance: Addressing 'Hidden' Harms in the Development of Artificial Intelligence (December 11, 2023). Available at SSRN: https://ssrn.com/abstract=4660737 or http://dx.doi.org/10.2139/ssrn.4660737

Chris Thomas (Contact Author)

The Alan Turing Institute ( email )

British Library
96 Euston Road
London, NW1 2DB
United Kingdom

Huw Roberts

University of Oxford - Oxford Internet Institute ( email )

1 St. Giles
University of Oxford
Oxford, Oxfordshire OX1 3JS
United Kingdom

HOME PAGE: https://digitalethicslab.oii.ox.ac.uk/huw-roberts/

Jakob Mökander

University of Oxford - Oxford Internet Institute ( email )

Princeton University - Center for Information Technology Policy ( email )

C231A E-Quad
Olden Street
Princeton, NJ 08540
United States

Andreas Tsamados

University of Oxford - Oxford Internet Institute ( email )

1 St. Giles
University of Oxford
Oxford, Oxfordshire OX1 3JS
United Kingdom

Mariarosaria Taddeo

University of Oxford - Oxford Internet Institute ( email )

1 St. Giles
University of Oxford
Oxford, Oxfordshire OX1 3JS
United Kingdom

Luciano Floridi

Yale University - Digital Ethics Center ( email )

85 Trumbull Street
New Haven, CT 06511
United States
(203) 432-6473 (Phone)

University of Bologna - Department of Legal Studies ( email )

Via Zamboni 22
Bologna, Bo 40100
Italy

HOME PAGE: http://www.unibo.it/sitoweb/luciano.floridi/en

