The Incompatible Incentives of Private Sector AI

Oxford Handbook of Ethics of Artificial Intelligence, Oxford University Press 2019

21 Pages · Posted: 2 May 2019

Tom Slee

Date Written: March 31, 2019

Abstract

Algorithms that sort people into categories are plagued by incompatible incentives. While more accurate algorithms may address problems of statistical bias and unfairness, they cannot solve the ethical challenges that arise from incompatible incentives.

Subjects of algorithmic decisions seek to optimize their outcomes, but such efforts may degrade the accuracy of the algorithm. To maintain their accuracy, algorithms must be accompanied by supplementary rules: “guardrails” that dictate the limits of acceptable behaviour by subjects. Algorithm owners are drawn into taking on the tasks of governance, managing and validating the behaviour of those who interact with their systems.

The governance role offers temptations to indulge in regulatory arbitrage. If governance is left to algorithm owners, it may lead to arbitrary and restrictive controls on individual behaviour. The goal of algorithmic governance by automated decision systems, social media recommender systems, and rating systems is a mirage, retreating into the distance whenever we seem to approach it.
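
The mechanism in the second paragraph, subjects optimizing the measured score until it no longer tracks the trait it was meant to measure, can be made concrete with a toy simulation. This sketch is not from the paper; every name, number, and distribution in it is an invented assumption, chosen only to show accuracy decaying as gaming effort rises.

```python
import random

random.seed(0)

N = 10_000        # simulated decision subjects (invented parameter)
THRESHOLD = 0.6   # score cutoff for a favourable decision (invented)

def observed_score(quality: float, gaming_effort: float) -> float:
    """Score the algorithm sees: true quality, inflated by gaming, plus noise."""
    return quality + gaming_effort + random.gauss(0, 0.05)

def accuracy(gaming_effort: float) -> float:
    """Fraction of decisions that match what true quality alone would warrant."""
    correct = 0
    for _ in range(N):
        quality = random.random()                    # latent trait in [0, 1]
        approved = observed_score(quality, gaming_effort) >= THRESHOLD
        deserved = quality >= THRESHOLD
        correct += approved == deserved
    return correct / N

# As subjects invest more effort in gaming the proxy, the classifier's
# decisions diverge further from the underlying trait.
for effort in (0.0, 0.1, 0.2, 0.3):
    print(f"gaming effort {effort:.1f} -> accuracy {accuracy(effort):.3f}")
```

In this reading, the "guardrails" the abstract describes are the owner's attempt to cap gaming effort by rule rather than by improving the score itself, which is what pulls algorithm owners into the governance role.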

Keywords: algorithms, incentives, incentive-incompatibility

Suggested Citation

Slee, Tom, The Incompatible Incentives of Private Sector AI (March 31, 2019). Oxford Handbook of Ethics of Artificial Intelligence, Oxford University Press 2019, Available at SSRN: https://ssrn.com/abstract=3363342 or http://dx.doi.org/10.2139/ssrn.3363342


Paper statistics

Downloads: 1,050 · Abstract Views: 3,118 · Rank: 45,164