Enslaving the Algorithm: From a ‘Right to an Explanation’ to a ‘Right to Better Decisions’?
IEEE Security & Privacy (2018) 16(3), pp. 46-54, DOI: 10.1109/MSP.2018.2701152
14 Pages. Posted: 16 Oct 2017. Last revised: 15 Jul 2018.
Date Written: 2018
As concerns about unfairness and discrimination in “black box” machine learning systems rise, a legal “right to an explanation” has emerged as a compellingly attractive approach for challenge and redress. We outline recent debates on the limited provisions in European data protection law, and introduce and analyse newer explanation rights in French administrative law and the draft modernised Council of Europe Convention 108. While individual rights can be useful, in privacy law they have historically unreasonably burdened the average data subject. “Meaningful information” about algorithmic logics is more technically possible than commonly thought, but this exacerbates a new “transparency fallacy” — an illusion of remedy rather than anything substantively helpful. While rights-based approaches deserve a firm place in the toolbox, other forms of governance, such as impact assessments, “soft law”, judicial review and model repositories deserve more attention, alongside catalysing agencies acting for users to control algorithmic system design.
Keywords: GDPR, Data Protection, Right to an Explanation, Right to Explanation, Algorithmic Accountability, Automated Decision-Making, Automated Decisions, General Data Protection Regulation, Council of Europe, Convention 108, CoE 108, Transparency, Judicial Review, Black Boxes, Machine Learning