The Problems of the Automation Bias in the Public Sector – A Legal Perspective

Weizenbaum Conference proceedings 2023, Forthcoming

8 Pages Posted: 2 Aug 2023

Date Written: May 8, 2023

Abstract

Automation bias describes the phenomenon, well documented in behavioural psychology, that people place excessive trust in the decision suggestions of machines. The law currently operates with a dichotomy: it covers only fully automated decisions, not those involving a human decision maker at any stage of the process. However, the widespread use of such systems, for example to inform decisions in education or benefits administration, creates a leverage effect and increases the number of people affected. The risk of automation bias is particularly high in environments where people routinely have to make a large number of similar decisions. Automated suggestions for job placements serve as an example that illustrates the particular challenges of decision support systems in the public sector. So far, these risks have not been sufficiently addressed in legislation, as an analysis of the GDPR and the draft Artificial Intelligence Act shows. I argue for the need for regulation and present initial approaches.

Keywords: AI-Bias, ADM-decisions, GDPR, discrimination, AI-Act, Public Employment Services

Suggested Citation

Ruschemeier, Hannah, The Problems of the Automation Bias in the Public Sector – A Legal Perspective (May 8, 2023). Weizenbaum Conference proceedings 2023, Forthcoming, Available at SSRN: https://ssrn.com/abstract=4521474

Hannah Ruschemeier (Contact Author)

FernUniversität in Hagen

Universitätsstrasse
Hagen, 58084
Germany

HOME PAGE: http://www.fernuni-hagen.de/prof-ruschemeier/en/
