Recommender Algorithms Do No Harm ~90% But… An Exploratory Risk-Utility Meta-Analysis of Algorithmic Audits

32 Pages Posted: 2 May 2023 Last revised: 3 May 2023

Martin Hilbert

University of California, Davis

Arti Thakur

University of California, Davis - Computational Communication Research Lab

Feng Ji

University of Toronto

Pablo M. Flores

University of California, Davis

Xiaoya Zhang

University of Florida, Department of Family, Youth and Community Sciences

Jee Young Bhan

University of California, Davis

Patrick Bernhard

University of California, Davis - Department of Political Science

Date Written: April 23, 2023

Abstract

We obtain a quantitatively coarse-grained but wide-ranging evaluation of how frequently recommender algorithms provide ‘good’ and ‘bad’ recommendations. We identified 146 algorithmic audits from 32 studies that report fitting risk-utility statistics from YouTube, Google Search, Twitter, Facebook, TikTok, Amazon, and others. The vast majority of algorithmic recommendations do no harm (around 90%), while about a quarter of recommendations safeguard humans from self-induced harm (‘do good’). The frequency of ‘bad’ recommendations is around 7–10% on average, which poses a potential risk notably higher than the risks posed by other consumer products. This average is remarkably robust across the audits and depends neither on the platform nor on the kind of harm (bias/discrimination, mental health and child harm, misinformation, or political extremism). Algorithmic audits find negative feedback loops that lock users into spirals of ‘bad’ recommendations (being ‘dragged down the rabbit hole’), but they find an even larger probability of positive spirals of ‘good’ recommendations. Our analysis refrains from any qualitative judgment of the severity of different harms. As concerns over ‘AI alignment’ with human values grow, necessitating more algorithmic audits, our study offers preliminary figures for quantitative comparison with other contemporary risks of modern life.

Keywords: recommender algorithms, algorithmic auditing, machine behavior, meta-analysis, digital harms

Suggested Citation

Hilbert, Martin and Thakur, Arti and Ji, Feng and Flores, Pablo M. and Zhang, Xiaoya and Bhan, Jee Young and Bernhard, Patrick, Recommender Algorithms Do No Harm ~90% But… An Exploratory Risk-Utility Meta-Analysis of Algorithmic Audits (April 23, 2023). Available at SSRN: https://ssrn.com/abstract=4426783 or http://dx.doi.org/10.2139/ssrn.4426783

Martin Hilbert (Contact Author)

University of California, Davis ( email )

One Shields Avenue
Apt 153
Davis, CA 95616
United States

HOME PAGE: http://www.martinhilbert.net

Arti Thakur

University of California, Davis - Computational Communication Research Lab ( email )

One Shields Avenue
Davis, CA 95616
United States

Feng Ji

University of Toronto ( email )

105 St George Street
Toronto, Ontario M5S 3G8
Canada

Pablo M. Flores

University of California, Davis ( email )

One Shields Avenue
Apt 153
Davis, CA 95616
United States

Xiaoya Zhang

University of Florida, Department of Family, Youth and Community Sciences ( email )

PO Box 117165, 201 Stuzin Hall
Gainesville, FL 32610-0496
United States

Jee Young Bhan

University of California, Davis ( email )

One Shields Avenue
Apt 153
Davis, CA 95616
United States

Patrick Bernhard

University of California, Davis - Department of Political Science ( email )

469 Kerr Hall
One Shields Avenue
Davis, CA 95616
United States

Paper statistics

Downloads: 153
Abstract Views: 433
Rank: 305,147