The Incompatible Incentives of Private Sector AI

Oxford Handbook of Ethics of Artificial Intelligence, Oxford University Press 2019

21 Pages Posted: 2 May 2019

Date Written: March 31, 2019

Abstract

Algorithms that sort people into categories are plagued by incompatible incentives. While more accurate algorithms may address problems of statistical bias and unfairness, they cannot solve the ethical challenges that arise from incompatible incentives.

Subjects of algorithmic decisions seek to optimize their outcomes, but such efforts may degrade the accuracy of the algorithm. To maintain their accuracy, algorithms must be accompanied by supplementary rules: “guardrails” that dictate the limits of acceptable behaviour by subjects. Algorithm owners are drawn into taking on the tasks of governance, managing and validating the behaviour of those who interact with their systems.

The governance role offers temptations to indulge in regulatory arbitrage. If governance is left to algorithm owners, it may lead to arbitrary and restrictive controls on individual behaviour. The goal of algorithmic governance by automated decision systems, social media recommender systems, and rating systems is a mirage, retreating into the distance whenever we seem to approach it.

Keywords: algorithms, incentives, incentive-incompatibility

Suggested Citation

Slee, Tom, The Incompatible Incentives of Private Sector AI (March 31, 2019). Oxford Handbook of Ethics of Artificial Intelligence, Oxford University Press 2019, Available at SSRN: https://ssrn.com/abstract=3363342 or http://dx.doi.org/10.2139/ssrn.3363342

Tom Slee (Contact Author)

SAP Canada

445 Wes Graham Way
Waterloo, Ontario N2L 6R2
Canada
