The Incompatible Incentives of Private Sector AI

Oxford Handbook of Ethics of Artificial Intelligence, Oxford University Press 2019

21 Pages
Posted: 2 May 2019

Date Written: March 31, 2019

Abstract

Algorithms that sort people into categories are plagued by incompatible incentives. While more accurate algorithms may address problems of statistical bias and unfairness, they cannot solve the ethical challenges that arise from incompatible incentives.

Subjects of algorithmic decisions seek to optimize their outcomes, but such efforts may degrade the accuracy of the algorithm. To maintain their accuracy, algorithms must be accompanied by supplementary rules: “guardrails” that dictate the limits of acceptable behaviour by subjects. Algorithm owners are drawn into taking on the tasks of governance, managing and validating the behaviour of those who interact with their systems.
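As a minimal sketch of the mechanism described above (not from the paper; the lending scenario, feature names, and numbers are illustrative assumptions), the following simulation shows how subjects who game an observable proxy can degrade a decision rule's accuracy without changing the underlying trait it is meant to measure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a lender approves applicants whose observable
# score exceeds a threshold. The score is a noisy proxy for an
# unobserved trait that actually determines repayment.
n = 10_000
ability = rng.normal(0.0, 1.0, n)            # unobserved trait
score = ability + rng.normal(0.0, 0.5, n)    # observable proxy
repays = ability > 0.0                       # ground-truth outcome

threshold = 0.0
accuracy_before = np.mean((score > threshold) == repays)

# Subjects just below the cutoff exert effort on the proxy alone
# (Goodhart-style gaming): the score rises, the trait does not.
gamed_score = score.copy()
near_cutoff = (score > threshold - 1.0) & (score <= threshold)
gamed_score[near_cutoff] += 1.1  # just enough to clear the bar

accuracy_after = np.mean((gamed_score > threshold) == repays)
print(f"accuracy before gaming: {accuracy_before:.3f}")
print(f"accuracy after gaming:  {accuracy_after:.3f}")
```

Because the gamed band contains more non-repayers than repayers, accuracy falls once gaming begins, which is why the abstract argues that owners must impose "guardrails" on subject behaviour to keep the algorithm informative.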

The governance role tempts algorithm owners into regulatory arbitrage. If governance is left to algorithm owners, it may lead to arbitrary and restrictive controls on individual behaviour. The goal of algorithmic governance by automated decision systems, social media recommender systems, and rating systems is a mirage, retreating into the distance whenever we seem to approach it.

Keywords: algorithms, incentives, incentive-incompatibility

Suggested Citation

Slee, Tom, The Incompatible Incentives of Private Sector AI (March 31, 2019). Oxford Handbook of Ethics of Artificial Intelligence, Oxford University Press 2019. Available at SSRN: https://ssrn.com/abstract=3363342 or http://dx.doi.org/10.2139/ssrn.3363342

Tom Slee (Contact Author)

SAP SE

Dietmar-Hopp-Allee 16
Waterloo, Ontario N2L 6R2
Canada
