The Governance Challenges with Addressing Fairness in AI

10 Pages Posted: 5 Jun 2023

Will Rinehart

Center for Growth and Opportunity at Utah State University

Date Written: May 19, 2023

Abstract

This paper responds to the National Telecommunications and Information Administration's request for comment in Docket NTIA-2023-0001-0001. In this proceeding, the NTIA sought to understand how commercial data collection and use, especially through artificial intelligence (AI) methods, might adversely affect underserved or marginalized communities through disparate impacts. Importantly, it also wanted to understand how “specific data collection and use practices potentially create or reinforce discriminatory obstacles for marginalized groups regarding access to key opportunities, such as employment, housing, education, healthcare, and access to credit.”

What the NTIA seeks to tackle is a wicked problem in Rittel and Webber’s classic sense. This paper argues for a twist on that theme. Wicked problems, which plague public policy and planning, are distinct from natural problems because “natural problems are definable and separable and may have solutions that are findable [while] the problems of governmental planning and especially those of social or policy planning are ill-defined.” The case of fairness in AI, however, shows that such problems can also be over-defined. Social problems “are never solved” but are only “re-solved, over and over again,” precisely because there are many possible solutions.
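To make concrete why fairness is over-defined rather than merely ill-defined, consider a minimal sketch, not drawn from the paper, in which a single hypothetical classifier is scored against two standard fairness definitions, demographic parity and equal opportunity. The data and group labels below are invented for illustration; the point is only that the same predictions can satisfy one definition exactly while violating the other.

# Illustrative sketch only; data and groups are invented.
# One hypothetical classifier is scored against two common fairness
# definitions, which deliver different verdicts on the same predictions.

def rate(flags):
    return sum(flags) / len(flags)

# Each entry is (true_label, predicted_label) for one applicant.
group_a = [(1, 1), (1, 1), (1, 0), (0, 1), (0, 0), (0, 0)]
group_b = [(1, 1), (0, 1), (0, 1), (0, 0), (0, 0), (0, 0)]

def selection_rate(group):
    # Demographic parity compares P(prediction = 1) across groups.
    return rate([pred for _, pred in group])

def true_positive_rate(group):
    # Equal opportunity compares P(prediction = 1 | true label = 1) across groups.
    return rate([pred for true, pred in group if true == 1])

for name, metric in [("demographic parity (selection rate)", selection_rate),
                     ("equal opportunity (true positive rate)", true_positive_rate)]:
    a, b = metric(group_a), metric(group_b)
    print(f"{name}: group A = {a:.2f}, group B = {b:.2f}, gap = {abs(a - b):.2f}")

In this toy example the classifier selects the same share of each group (a demographic-parity gap of zero) yet correctly identifies qualified members of the two groups at different rates (an equal-opportunity gap of 0.33). Which verdict counts as the “fair” one depends entirely on the definition chosen, which is the sense in which the problem admits many possible solutions.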

When the NTIA issues its final report, it should resist the tendency to reduce wicked problems to natural ones. Rather, the agency should recognize, as one report described it, the existence of a hidden universe of uncertainty about AI models. To understand this problem holistically, the paper proceeds in three sections:

• The first section explains how data-generating processes can create legibility but never solve the problem of illegibility.
• The second section explains what is meant by bias, breaks down the problems in model selection, and walks through the problem of defining fairness.
• The third section explores why people have a distaste for the kind of moral calculations made by machines and why we should focus on impact.

Keywords: AI, bias, AI audits, algorithms, algorithmic decision making, public policy, alignment, artificial intelligence

JEL Classification: K00, D73, H00

Suggested Citation

Rinehart, William, The Governance Challenges with Addressing Fairness in AI (May 19, 2023). Available at SSRN: https://ssrn.com/abstract=4453683 or http://dx.doi.org/10.2139/ssrn.4453683

William Rinehart (Contact Author)

Center for Growth and Opportunity at Utah State University ( email )

3525 Old Main Hill
Logan, UT 84322
United States
