BEAT Unintended Bias in Personalized Policies

37 Pages · Posted: 21 Aug 2021 · Last revised: 24 Aug 2021

Eva Ascarza

Harvard Business School

Ayelet Israeli

Harvard Business School - Marketing Unit

Date Written: August 19, 2021

Abstract

An inherent risk of algorithmic personalization is disproportionate targeting of individuals from certain groups (defined by demographic characteristics such as gender or race), even when the decision maker does not intend to discriminate based on those “protected” attributes. This unintended discrimination often arises from underlying correlations in the data between the protected attributes and other observed characteristics that the machine learning (ML) algorithm uses to generate predictions and target individuals optimally. Because these correlations are hidden in high-dimensional data, removing protected attributes from the database does not solve the discrimination problem; instead, it often exacerbates the problem by making the bias undetectable and, in some cases, even larger.
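
The proxy mechanism described above can be reproduced in a few lines. Below is a minimal, hedged sketch (not the authors' code or data): it fits a GRF-style causal forest with the open-source econml package on synthetic data in which the protected attribute is excluded from the features but a correlated feature remains, and then audits who gets targeted.

    # Minimal illustration of proxy bias (synthetic data; not the authors' code).
    # Requires numpy, scikit-learn, and econml (pip install econml); econml's
    # CausalForestDML is a GRF-style estimator of heterogeneous treatment effects.
    import numpy as np
    from econml.dml import CausalForestDML
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 5000
    protected = rng.integers(0, 2, n)          # binary protected attribute
    x1 = protected + rng.normal(0, 1, n)       # feature correlated with it
    x2 = rng.normal(0, 1, n)                   # feature unrelated to it
    X = np.column_stack([x1, x2])              # protected attribute NOT included
    T = rng.integers(0, 2, n)                  # randomized treatment
    Y = (0.5 * x1 + 0.5 * x2) * T + rng.normal(0, 1, n)

    cf = CausalForestDML(model_y=RandomForestRegressor(),
                         model_t=RandomForestClassifier(),
                         discrete_treatment=True, random_state=0)
    cf.fit(Y, T, X=X)
    scores = cf.effect(X)                      # predicted individual effects

    # Target the top 30% by predicted effect, then audit balance by group:
    targeted = scores >= np.quantile(scores, 0.7)
    for g in (0, 1):
        print(f"group {g}: {targeted[protected == g].mean():.1%} targeted")
    # Although `protected` was dropped from X, the correlated feature x1 leaks
    # group membership, so the targeting shares typically differ across groups.
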

We propose the BEAT (Bias-Eliminating Adapted Trees) framework to address these issues. The framework allows decision makers to target individuals based on differences in their predicted behavior, thereby capturing value from personalization, while ensuring that the allocation of resources is balanced with respect to the population. Essentially, the method extracts only the heterogeneity in the data that is unrelated to the protected attributes. To do so, we build on the Generalized Random Forests (GRF) framework (Wager and Athey 2018; Athey et al. 2019) and develop a targeting allocation that is “balanced” with respect to protected attributes. We validate the BEAT framework using simulations and an online experiment with N = 3,146 participants. The framework can be applied to any allocation decision that is based on prediction algorithms, such as medical treatments, hiring decisions, product recommendations, or dynamic pricing.
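
For intuition on the “balanced” allocation that BEAT targets, the sketch below uses a deliberately simple stand-in: it residualizes synthetic targeting scores on a protected attribute with a linear regression and retargets on the residuals. This only illustrates the balance objective; BEAT itself adapts the GRF tree construction rather than post-processing scores.

    # Illustrative stand-in for the balance goal (not the BEAT algorithm itself).
    # Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 5000
    protected = rng.integers(0, 2, n)
    # Synthetic "predicted effect" scores partly explained by the protected
    # attribute, as happens when features correlated with it drive predictions:
    scores = 0.6 * protected + rng.normal(0, 1, n)

    # Strip the component of the scores linearly explained by the attribute:
    lr = LinearRegression().fit(protected.reshape(-1, 1), scores)
    residual = scores - lr.predict(protected.reshape(-1, 1))

    for name, s in (("raw", scores), ("residualized", residual)):
        targeted = s >= np.quantile(s, 0.7)
        shares = [targeted[protected == g].mean() for g in (0, 1)]
        print(f"{name:>12}: {shares[0]:.1%} vs {shares[1]:.1%} targeted")
    # Residualized scores yield nearly equal targeting shares across groups, at
    # the cost of discarding group-correlated heterogeneity; BEAT instead builds
    # the balance requirement into the trees themselves.
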

Keywords: Algorithmic bias, personalization, targeting, generalized random forests (GRF), fairness, discrimination

JEL Classification: C53, C54, C55, M31

Suggested Citation

Ascarza, Eva and Israeli, Ayelet, BEAT Unintended Bias in Personalized Policies (August 19, 2021). Available at SSRN: https://ssrn.com/abstract=3908088 or http://dx.doi.org/10.2139/ssrn.3908088

Eva Ascarza (Contact Author)

Harvard Business School

Soldiers Field
Boston, MA 02163
United States

HOME PAGE: http://evaascarza.com

Ayelet Israeli

Harvard Business School - Marketing Unit

Soldiers Field
Boston, MA 02163
United States

Paper statistics

Downloads: 47
Abstract Views: 441