Algorithmic Reason-Giving, Arbitrary-and-Capricious Review, and the Need for a Clear Normative Baseline

57 Pages · Posted: 1 Dec 2023 · Last revised: 6 Feb 2024

Date Written: November 21, 2023


Federal agencies have caught the AI bug. A December 2023 report by the Government Accountability Office (GAO) found that 20 of 23 federal agencies surveyed reported using some form of AI, with about 200 current use cases for algorithms and about 1,000 more in the planning phase. These agencies are using algorithms in all aspects of administration, including rulemaking, adjudication, and enforcement. The risks of AI are well documented. Previous work has shown that algorithms can be, among other things, biased and prone to error. However, perhaps no problem poses a more serious threat to the use of algorithms by agencies than the fact that algorithms can be opaque, which means that it can be difficult to understand how an algorithm works and why it reaches certain results. That is because opacity compromises reason-giving, a basic pillar of administrative governance. Inadequate reason-giving poses legal problems for agencies because the reasons agencies give for their decisions form the basis of judicial review. Without adequate reason-giving, agency action will fail arbitrary-and-capricious review under the Administrative Procedure Act. Inadequate reason-giving poses normative problems, too, since reason-giving promotes quality decision-making, fosters accountability, and helps agencies respect parties’ dignitary interests.

This Article considers whether agencies can use algorithms without running afoul of standards, both legal and normative, for reason-giving. It begins by disaggregating algorithmic reason-giving, explaining that algorithmic reason-giving includes both the reasons an agency gives for an algorithm’s design (systemic reason-giving) and the reasons an agency gives for an individual decision when the decision-making process involves an algorithm (case-specific reason-giving). It then evaluates systemic reason-giving and case-specific reason-giving in turn. Once the normative assessment is complete, this Article considers its implications for arbitrary-and-capricious review, concluding that at least some algorithms should pass judicial muster. The Article finishes by offering a framework that courts can use when evaluating whether the use of an algorithm is arbitrary and capricious and that agencies can use to decide whether to create an algorithm in the first place.

Although understanding the relationship between algorithms and reason-giving is important, this Article’s true aim is broader. It seeks to reframe debates over agencies’ use of AI by emphasizing that the baseline against which these algorithms should be compared is not some idealized human decision-maker, but rather the various kinds of policies (rules, internal procedures, guidance) that agencies have used since their inception to promote core administrative values like consistency, accuracy, and efficiency. The comparison between algorithms and policies better captures the role algorithms currently play in administrative governance, gives proper weight to the reasons agencies have for turning to algorithms in the first place, and helps us see how algorithms do and do not fit within the existing structures of administrative law. At bottom, comparing algorithms to policies reminds us that the tension between individualized consideration and centralized bureaucratic management is endemic to agency administration. At most, algorithms have given this tension a new flavor. Make no mistake: this tension cannot be eliminated, only managed. Algorithmic reason-giving is a case in point.

Suggested Citation

Averill, Cameron, Algorithmic Reason-Giving, Arbitrary-and-Capricious Review, and the Need for a Clear Normative Baseline (November 21, 2023). Available at SSRN.

Cameron Averill (Contact Author)

Yale Law School ( email )

127 Wall Street
New Haven, CT 06510
United States

