Why Predictive Algorithms are So Risky for Public Sector Bodies

17 Pages Posted: 3 Dec 2020

Madeleine Waller

King’s College London

Paul Waller

University of Bradford

Date Written: October 21, 2020


This paper collates multidisciplinary perspectives on the use of predictive analytics in government services. It moves away from the hyped narratives of “AI” and “digital”, and from broad appeals to “ethics”, to focus on the specific risks of using prediction algorithms in public administration. Guidelines for AI use in public bodies are currently available, but there is little evidence that they are being followed or written into new mandatory regulations. The use of algorithms is not only a question of whether they are fair and safe, but of whether they comply with the law and whether they actually work. In public services especially, much must be considered before implementing predictive analytics algorithms, as flawed use in this context can lead to harmful consequences for citizens, individually and collectively, and for public sector workers. All stages of the implementation process are discussed, from specification of the problem and model design through to the context of use and the outcomes. Evidence is drawn from case studies of use in child welfare services, the US justice system, and UK public examination grading in 2020. The paper argues that the risks and drawbacks of such technological approaches need to be understood more comprehensively, and tested in the operational setting, before implementation. It concludes that while algorithms may be useful in some contexts and can help to solve problems, those that attempt to predict real life have a long way to go before they are safe and trusted for use.
As “ethics” are located in time, place, and social norms, the authors suggest that in the context of public administration, laws on human rights, statutory administrative functions, and data protection — all within the principles of the rule of law — provide the basis for appraising the use of algorithms, with maladministration, rather than a breach of “ethics”, being the primary concern.

Keywords: Algorithms, Predictive Analytics, Data Analytics, Artificial Intelligence, AI, Machine Learning, Automated Decision Making, Government Services, Public Sector, Public Administration, Ethics, Trust, Data Protection, Law, Accuracy, Bias, Transparency, Explainability, Accountability

Suggested Citation

Waller, Madeleine and Waller, Paul, Why Predictive Algorithms are So Risky for Public Sector Bodies (October 21, 2020). Available at SSRN: https://ssrn.com/abstract=3716166 or http://dx.doi.org/10.2139/ssrn.3716166

Madeleine Waller (Contact Author)

King’s College London ( email )

London, England WC2R 2LS
United Kingdom

Paul Waller

University of Bradford ( email )

Faculty of Management, Law and Social Sciences
Bradford, West Yorkshire BD7 1DP
United Kingdom
07973 910737 (Phone)
