AI, Algorithms, and Awful Humans
96 Fordham Law Review (forthcoming 2024)
19 Pages · Posted: 6 Nov 2023
Date Written: October 16, 2023
This Essay critiques a set of arguments often made to justify the use of AI and algorithmic decision-making technologies. These arguments all share a common premise – that human decision-making is so deeply flawed that augmenting it or replacing it with machines will be an improvement.
In this Essay, we argue that these arguments fail to account for the full complexity of human and machine decision-making when it comes to decisions about humans. Such decisions involve special emotional and moral considerations that algorithms are not yet prepared to make – and might never be able to make.
It is wrong to view machines as deciding like humans do, only better because they are supposedly cleansed of bias. Machines decide fundamentally differently, and bias often persists. These differences are especially pronounced when decisions involve a moral or value judgment or concern human lives and behavior. Some of the human dimensions of decision-making that cause great problems also have great virtues. Additionally, algorithms often rely too heavily on quantifiable data to the exclusion of qualitative data. Whereas certain matters, such as the weather, might be readily reducible to quantifiable data, human lives are far more complex. Having humans oversee machines is not a cure; humans often perform badly when reviewing algorithmic output.
We contend that algorithmic decision-making is being relied upon too eagerly and with insufficient skepticism. For decisions about humans, there are important considerations that must be better appreciated before these decisions are delegated in whole or in part to machines.
Keywords: AI, artificial intelligence, privacy, algorithmic decision-making, machine learning
JEL Classification: K00, C00, C10