Consumer Finance and AI: The Death of Second Opinions?
59 Pages · Posted: 15 Apr 2019 · Last revised: 20 Dec 2020
Date Written: March 28, 2019
In today’s world, people routinely rely on the advice of algorithms in all aspects of their lives, from mundane tasks like choosing the most efficient navigation route home to significant financial decisions such as how to invest their retirement savings. Because of the ubiquity of algorithms, people have become increasingly comfortable relying on them—a tendency known as automation bias. This Article presents an original empirical study that explores automation bias in the area of consumer finance. The study confirms that when making consumer finance decisions, including significant investment decisions, Americans strongly prefer following the recommendations of algorithms to those of human experts. Moreover, the study finds that even after their investment performs poorly as a result of following an algorithm’s advice, consumers continue to favor algorithms over human experts, and feel more confident that algorithms give them the better recommendation. This result demonstrates that we view algorithms—especially those rooted in big data—as a superior authority.
Our increasing deference to algorithmic results is concerning because we are avoiding obtaining “a second opinion”—a term often used in the medical context—even when the first opinion comes from an algorithm that has made mistakes in the past. Although second opinions are costly and may not always be economically efficient, they are important—and even critical—in certain situations. For example, second opinions can be critical for high-stakes decisions, for decisions on which experts disagree or that involve many options, or in situations where the decision-maker does not like the outcome but is unqualified to evaluate the soundness of the first opinion. By reducing the acceptability of seeking second opinions, our algorithm-dependent society is nudging us to tone down human traits such as creativity, innovation, and critical thinking, and instead to rely blindly on the new experts—the algorithms, which are black boxes whose biases are difficult to assess.
Second opinions need not be human-formulated. In the era of big data and AI, different algorithms built on dissimilar data and assumptions can offer second opinions and introduce more options to users. In fact, algorithmic second opinions may be more objective than human-formulated ones, because a human reviewing the first algorithmic opinion is herself affected by automation bias. Given the strength of that bias, great care must be taken to ensure that human second opinions remain objective.
The Article concludes by suggesting cultural changes: hyper-nudging users to seek second opinions, including AI-based opinions, and requiring algorithmic auditing.