Diachronic Interpretability & Machine Learning Systems

Journal of Cross-disciplinary Research in Computational Law, January 2022

34 Pages. Posted: 23 Nov 2020. Last revised: 19 Mar 2022

Sylvie Delacroix

University of Birmingham - Birmingham Law School; The Alan Turing Institute

Date Written: December 22, 2020

Abstract

If a system is interpretable today, why would it not be just as interpretable in five or ten years' time? Once one takes into account the fact that interpretability requires both an interpretable object and a subject capable of interpretation, one may distinguish two types of factors that will negatively affect the interpretability of some ML systems over time.

On the ‘interpretable object’ front, the vast literature on ML interpretability has largely been motivated by a concern to preserve the possibility of ascertaining whether the accuracy of some ML model holds beyond the training data. The variety of transparency and explainability strategies that have been put forward can make us blind to the fact that what an ML system has learned may produce mostly accurate insights when deployed in real-life contexts this year, yet become useless when faced with next year’s socially transformed cohort. This is a known problem in computer science, yet its ethical and legal implications have yet to be vigorously debated.
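The phenomenon the paper points to here is known in the ML literature as distribution (or concept) drift: a model frozen at training time can lose accuracy once the population it scores has changed. The sketch below is a hypothetical illustration, not drawn from the paper; the cohort parameters, the toy one-feature threshold "model", and all function names are assumptions chosen purely to make the accuracy decay visible.

```python
import random

random.seed(0)

def sample_cohort(pos_mean, neg_mean, n=1000):
    """Generate (feature, label) pairs for one yearly cohort.

    Each class is drawn from a Gaussian around its own mean; the means
    stand in for whatever social facts the feature tracks that year.
    """
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        mean = pos_mean if label else neg_mean
        data.append((random.gauss(mean, 1.0), int(label)))
    return data

def fit_threshold(data):
    """'Train' by taking the midpoint between class means.

    A deliberately trivial stand-in for any fitted classifier: the point
    is only that its parameters are frozen at training time.
    """
    pos = [x for x, y in data if y == 1]
    neg = [x for x, y in data if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, data):
    """Fraction of examples the frozen threshold rule classifies correctly."""
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# Year 1: the model is trained and evaluated on the same population.
year1 = sample_cohort(pos_mean=2.0, neg_mean=0.0)
threshold = fit_threshold(year1)
acc_then = accuracy(threshold, year1)

# Year 2: the population has drifted, but the frozen model is reused as-is.
year2 = sample_cohort(pos_mean=0.5, neg_mean=-1.5)
acc_now = accuracy(threshold, year2)

print(f"accuracy on training-year cohort: {acc_then:.2f}")
print(f"accuracy on drifted cohort:       {acc_now:.2f}")
```

Running this, the threshold learned on the year-1 cohort classifies that cohort well but loses substantial accuracy on the shifted year-2 cohort, even though nothing about the model itself has changed: the interpretable object is intact, while what it tells us about the world has quietly expired.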

On the ‘subject capable of interpretation’ front, ethically and legally significant practices presuppose the continuous, uncertain (re)articulation of conflicting values. Without our continued drive to call for better ways of doing things, these discursive practices would wither away. Retaining such a collective ability calls for a change in the way we articulate interpretability requirements for systems deployed in ethically and legally significant contexts: we need to build systems whose outputs we are capable of contesting today, as well as in five years’ time. This calls for what I term ‘ensemble contestability’ features.

Keywords: Interpretability, Explainability, Machine Learning, Normative Agency, Agency, Ethical Expertise

Suggested Citation

Delacroix, Sylvie, Diachronic Interpretability & Machine Learning Systems (December 22, 2020). Journal of Cross-disciplinary Research in Computational Law, January 2022. Available at SSRN: https://ssrn.com/abstract=3728606 or http://dx.doi.org/10.2139/ssrn.3728606

Sylvie Delacroix (Contact Author)

University of Birmingham - Birmingham Law School

Edgbaston
Birmingham B15 2TT
United Kingdom

HOME PAGE: https://www.birmingham.ac.uk/staff/profiles/law/delacroix-sylvie.aspx

The Alan Turing Institute

96 Euston Road
London, NW1 2DB
United Kingdom

Paper statistics

Downloads: 75
Abstract Views: 580
Rank: 421,509