Diachronic Interpretability & Machine Learning Systems
Journal of Cross-disciplinary Research in Computational Law, January 2022
34 Pages. Posted: 23 Nov 2020. Last revised: 19 Mar 2022.
Date Written: December 22, 2020
Abstract
If a system is interpretable today, why would it not be as interpretable in five or ten years' time? Once one takes into account the fact that interpretability requires both an interpretable object and a subject capable of interpretation, one may distinguish between two types of factors that will negatively impact the interpretability of some ML systems over time.
On the ‘interpretable object’ front, the vast literature on ML interpretability has largely been motivated by a concern to preserve the possibility of ascertaining whether the accuracy of some ML model holds beyond the training data. The variety of transparency and explainability strategies that have been put forward can make us blind to the fact that what an ML system has learned may produce mostly accurate insights when deployed in real-life contexts this year, yet become useless when faced with next year's socially transformed cohort. This is a known problem in computer science, yet its ethical and legal implications have yet to be vigorously debated.
On the ‘subject capable of interpretation’ front, ethically and legally significant practices presuppose the continuous, uncertain (re)articulation of conflicting values. Without our continued drive to call for better ways of doing things, these discursive practices would wither away. Retaining such a collective ability calls for a change in the way we articulate interpretability requirements for systems deployed in ethically and legally significant contexts: we need to build systems whose outputs we are capable of contesting today, as well as in five years' time. This is what motivates what I call ‘ensemble contestability’ features.
Keywords: Interpretability, Explainability, Machine Learning, Normative Agency, Agency, Ethical Expertise