From black box to glass box: algorithmic explainability as a strategic decision
32 Pages. Posted: 10 Nov 2021. Last revised: 30 Nov 2022
Date Written: September 21, 2022
Abstract
The best-performing and most popular algorithms are often the least explainable. In parallel, there is growing concern and evidence that algorithms may autonomously engage in welfare-damaging strategies. Inspired by recent regulatory proposals, we model a firm's compliance strategy under the threat of (costly and imperfect) regulatory audits. Firms may invest in algorithmic "explainability" to better understand their own algorithms and reduce their cost of compliance.
We find that, when audit efficacy is not affected by explainability, audits always induce investment in explainability. Mandatory disclosure of the explainability level makes the auditing policy even more effective, because it allows firms to signal compliance.
If, instead, explainability makes audits more effective, a firm may attempt to hide potential misconduct behind algorithmic opacity, a phenomenon exacerbated by opportunistic auditing policies. In these cases, audits may stimulate the proliferation of black box algorithms, and minimum explainability standards may need to be envisaged.
Keywords: Explainability, Algorithmic decision-making, Self-regulation, Audits, Output regulation.
JEL Classification: D21, D83, K24, K13, K42