Understanding and Explaining Automated Decisions

28 Pages. Posted: 20 Jan 2019

Alison Powell

London School of Economics & Political Science; University of Oxford - Oxford Internet Institute

Arnav Joshi

London School of Economics & Political Science (LSE)

Paul-Marie Carfantan

London School of Economics & Political Science (LSE); HEC Paris

Georgina Bourke

affiliation not provided to SSRN

Ian Hutchinson

affiliation not provided to SSRN

Annalisa Eichholzer

affiliation not provided to SSRN

Date Written: January 3, 2019

Abstract

Automated decisions, ranging from straightforward algorithmic calculations to the more complex outputs of machine learning and other AI techniques, can be opaque. This opacity, combined with the risk of bias and injustice arising from choices made at the design stage and from the data used to train automated systems, means that automated systems can reproduce or intensify inequalities that already exist in society. The Understanding Automated Decisions project, a partnership between LSE Media and Communication researchers and designers at the technology studio Projects By IF, explored ways to explain how automated decisions function, both ex ante and ex post, through interface design. We also explored ways to solicit public reflection on the governance and regulation of automated systems.

Key Findings:

- Simple interface design can present how data are used to make decisions.
- Ex ante and ex post approaches to explanation each target different areas of system design and function, and interface prototypes can help illustrate when, and under what circumstances, each may be helpful to people using or subject to these systems (see the sketch below).
- In some cases, interface-based explanations cannot address the elements of a system that may actually have the greatest impact on people.
- Stressing transparency as the main underpinning of an explainable decision may not address the most significant impacts of a system's design.
- Regulators may wish to specify different forms of explainability, but should also acknowledge that explanation, as a means of achieving or building on transparency, may not achieve all of its ends.
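To make the ex ante / ex post distinction concrete, the following is a minimal, hypothetical Python sketch. The decision rule, feature names, weights, and threshold are invented for illustration and are not drawn from the project's prototypes: ex ante, the explanation describes what the system will consider before any decision is made; ex post, it accounts for a particular outcome after the fact.

# Minimal sketch of ex ante vs ex post explanation for a toy scoring decision.
# All feature names, weights, and thresholds are hypothetical.

WEIGHTS = {"income": 0.5, "existing_debt": -0.3, "years_at_address": 0.2}
THRESHOLD = 0.6

def explain_ex_ante() -> str:
    """Describe, before any decision is made, what the system considers."""
    factors = ", ".join(WEIGHTS)
    return (f"This system scores applications using: {factors}. "
            f"Applications scoring above {THRESHOLD} are approved.")

def decide_and_explain_ex_post(applicant: dict) -> str:
    """Make a decision, then explain which factor most drove this outcome."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    outcome = "approved" if score > THRESHOLD else "declined"
    top = max(contributions, key=lambda k: abs(contributions[k]))
    return (f"Your application was {outcome} (score {score:.2f}). "
            f"The factor that most affected this decision was '{top}'.")

if __name__ == "__main__":
    print(explain_ex_ante())
    print(decide_and_explain_ex_post(
        {"income": 0.9, "existing_debt": 0.4, "years_at_address": 0.5}))

Even in this toy example, the ex post explanation can only surface factors the designer chose to expose, which mirrors the finding that interface-based explanations cannot always address the most consequential elements of a system.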

Keywords: automated decisions, algorithms, explanation, machine learning, AI, transparency, accountability

Suggested Citation

Powell, Alison and Joshi, Arnav and Carfantan, Paul-Marie and Bourke, Georgina and Hutchinson, Ian and Eichholzer, Annalisa, Understanding and Explaining Automated Decisions (January 3, 2019). Available at SSRN: https://ssrn.com/abstract=3309779 or http://dx.doi.org/10.2139/ssrn.3309779

Alison Powell (Contact Author)

London School of Economics & Political Science

London WC2A 2AE
United Kingdom

University of Oxford - Oxford Internet Institute

1 St. Giles
University of Oxford
Oxford, Oxfordshire OX1 3JS
United Kingdom

Arnav Joshi

London School of Economics & Political Science (LSE)

Houghton Street
London WC2A 2AE
United Kingdom

Paul-Marie Carfantan

London School of Economics & Political Science (LSE)

Houghton Street
London WC2A 2AE
United Kingdom

HEC Paris

1 rue de la Liberation
Jouy-en-Josas Cedex, 78351
France

Georgina Bourke

affiliation not provided to SSRN

Ian Hutchinson

affiliation not provided to SSRN

Annalisa Eichholzer

affiliation not provided to SSRN

Paper statistics

Downloads: 333
Abstract Views: 1,441
Rank: 174,558