Contesting Automated Decisions
European Data Protection Law Review 4 (2018), 433-446
14 pages. Posted: 6 January 2019; last revised: 24 June 2019.
Date Written: December 21, 2018
Abstract
This paper identifies the essentials of a ‘transparency model’ which aims to analyse automated decision-making systems not by the mechanisms of their operation but by the normativity embedded in their behaviour/action. First, transparency-related concerns and challenges inherent in machine learning (ML) are conceptualised as ‘informational asymmetries’. Under a threefold approach, this part explains and taxonomises how i) intransparency and opacity, ii) epistemological flaws (spurious or weak causation), and iii) biased processes inherent in ML create cognitive obstacles for the data subject in contesting automated decisions. Concluding that the transparency needs of an effective contestation scheme go well beyond the disclosure of algorithms or other computational elements, the following part sets out the essentials of a rule-based ‘transparency model’: i) the data as ‘decisional cues/input’; ii) the normativities contained at both the inference and the decisional (rule-making) level; iii) the context and further implications of the decision; and iv) the accountable actors. This is followed by the identification of certain impediments, at the technical, economic and legal levels, to the implementation of the model. Finally, the paper provides theoretical guidance on the preliminaries of a ‘contestability scheme’ aimed at compliance with transparency obligations such as those under the EU data protection regime (the GDPR).
Keywords: algorithmic transparency, automated decisions, GDPR Article 22, explainable AI, techno-regulation