Understanding and Explaining Automated Decisions
28 Pages
Posted: 20 Jan 2019
Date Written: January 3, 2019
Abstract
Automated decisions, whether produced by straightforward algorithmic calculations or by more complex machine learning and other AI techniques, can be opaque. Combined with the risk of bias and injustice arising from decisions taken at the design stage and from the data used to train automated systems, this opacity means that automated systems can reproduce or intensify inequalities that already exist in society. The Understanding Automated Decisions project, conducted through a partnership between LSE Media and Communication researchers and designers at technology studio Projects By IF, explored ways to explain how automated decisions function, both ex ante and ex post, by designing interfaces. We also explored ways to solicit public reflection on the governance and regulation of automated systems.
Key Findings:
- How data are used to make decisions can be presented through simple interface design.
- Ex ante and ex post approaches to explanation each target different areas of system design and function, and interface prototypes can help to illustrate when and under what circumstances each may be helpful to people using, or subject to, these systems.
- In some cases, these interface-based explanations cannot address the elements of a system that may actually have the greatest impact on people.
- Stressing transparency as the main underpinning of an explainable decision may not address the most significant impacts of a system's design.
- Regulators may wish to specify different forms of explainability, but should also acknowledge that explanation, as a means of achieving or building on transparency, may not achieve all of its ends.
Keywords: automated decisions, algorithms, explanation, machine learning, AI, transparency, accountability