Adjudicating with Inscrutable Decision Tools
in MACHINES WE TRUST: GETTING ALONG WITH ARTIFICIAL INTELLIGENCE, Marcello Pelillo and Teresa Scantamburlo (Eds.) (MIT Press, 2020 Forthcoming)
29 Pages · Posted: 24 Jul 2020 · Last revised: 6 Nov 2020
Date Written: December 20, 2019
Machine learning models are increasingly used to make decisions important to people's lives. Often, however, humans have difficulty understanding how these automated decision tools arrive at their assessments. This inscrutability has drawn the attention of data scientists, legal scholars, and others, but the discussion so far has focused on explanations aimed at decision subjects. This chapter highlights the previously neglected importance of explanations to human adjudicators, who generally retain ultimate responsibility for significant decisions. It approaches the issue by comparing inscrutable automated decision tools to the rule-like decision criteria that adjudicators have traditionally implemented, often in combination with more standard-like criteria. The chapter analyzes the novel difficulties that inscrutable automated decision tools create for adjudicators and concludes with suggestions for addressing them.
Keywords: machine learning, algorithmic decisionmaking, adjudication, explainability, interpretability