
Accountable Algorithms

66 pages. Posted: 16 Apr 2016. Last revised: 20 Nov 2016.

Joshua A. Kroll

Center for Information Technology Policy, Princeton University

Joanna Huey

Princeton University - Center for Information Technology Policy

Solon Barocas

Cornell University

Edward W. Felten

Princeton University - Center for Information Technology Policy; Princeton University - Woodrow Wilson School of Public and International Affairs; Princeton University - Department of Computer Science

Joel R. Reidenberg

Fordham University School of Law

David G. Robinson

Georgetown University Law Center; Upturn

Harlan Yu

Princeton University - Center for Information Technology Policy; Princeton University - Department of Computer Science; Stanford University - Stanford Law School Center for Internet and Society

Date Written: March 2, 2016

Abstract

Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for an IRS audit, and grant or deny immigration visas.

The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decision-makers and often fail when applied to computers instead: for example, how do you judge the intent of a piece of software? Additional approaches are needed to make automated decision systems — with their potentially incorrect, unjustified or unfair results — accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness.

We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the complexity of code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it permits tax cheats or terrorists to game the systems determining audits or security screening.

The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities — more subtle and flexible than total transparency — to design decision-making algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of algorithms, but also — in certain cases — the governance of decision-making in general. The implicit (or explicit) biases of human decision-makers can be difficult to find and root out, but we can peer into the “brain” of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterwards.
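The idea that a decision policy "can be declared prior to use and verified afterwards" rests on cryptographic commitments, one of the computer science tools the Article surveys. The sketch below, in Python, is a minimal illustration of a hash-based commit-and-reveal protocol; the function names and the example policy string are ours, not the authors', and a production system would use a formally analyzed commitment scheme rather than this toy.

```python
import hashlib
import secrets

def commit(policy: bytes) -> tuple[bytes, bytes]:
    """Commit to a decision policy before any decisions are made.

    Returns (commitment, nonce). The commitment can be published at once;
    the nonce is kept secret and revealed later for verification.
    """
    nonce = secrets.token_bytes(32)  # random nonce hides the policy's content
    commitment = hashlib.sha256(nonce + policy).digest()
    return commitment, nonce

def verify(commitment: bytes, policy: bytes, nonce: bytes) -> bool:
    """Check that a later-revealed policy matches the earlier commitment."""
    return hashlib.sha256(nonce + policy).digest() == commitment

# A decision-maker publishes `c` before deciding any cases...
policy = b"approve if applicant score >= 700"  # hypothetical rule
c, nonce = commit(policy)

# ...and later reveals (policy, nonce) so an overseer can confirm the rule
# used was the rule announced, without the rule being public in the interim.
assert verify(c, policy, nonce)
```

The commitment is binding (the decision-maker cannot substitute a different rule after the fact without the hash failing to match) yet hiding (publishing the hash reveals nothing about the rule), which is exactly the middle ground between secrecy and total transparency the abstract describes.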

The technological tools introduced in this Article apply widely. They can be used in designing decision-making processes from both the private and public sectors, and they can be tailored to verify different characteristics as desired by decision-makers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also engender policy discussions and closer looks at legal standards. As such, these tools have far-reaching implications throughout law and society.

Part I of this Article provides an accessible and concise introduction to foundational computer science concepts that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decision or the process by which the decision was reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department’s diversity visa lottery. In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. We also show how algorithmic decision-making may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. Lastly, in Part IV, we propose an agenda for further synergistic collaboration among computer science, law, and policy to advance the design of automated decision processes for accountability.
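Procedural regularity in a lottery, the property Part II examines through the diversity visa example, can be made publicly checkable by deriving every applicant's outcome deterministically from a single seed that is committed to in advance and revealed afterwards. The Python sketch below illustrates that reproducibility; it is our simplified illustration under assumed names, not the State Department's actual procedure or the authors' implementation.

```python
import hashlib

def lottery_rank(seed: bytes, applicant_id: str) -> int:
    """Deterministic pseudorandom rank for one applicant.

    Hashing the seed with the applicant's ID yields a rank that no party
    can steer toward a chosen applicant once the seed is fixed.
    """
    digest = hashlib.sha256(seed + applicant_id.encode("utf-8")).digest()
    return int.from_bytes(digest, "big")

def run_lottery(seed: bytes, applicants: list[str], winners: int) -> list[str]:
    """Select `winners` applicants by sorting on pseudorandom rank."""
    return sorted(applicants, key=lambda a: lottery_rank(seed, a))[:winners]

# Hypothetical applicant pool and a seed that would be committed to
# (e.g., via a published hash) before the applicant list closes.
applicants = ["A-1001", "A-1002", "A-1003", "A-1004"]
seed = b"seed committed before, revealed after"

selected = run_lottery(seed, applicants, winners=2)

# Anyone holding the revealed seed and the applicant list can
# recompute exactly the same selection, case by case.
assert selected == run_lottery(seed, applicants, winners=2)
```

Because the selection is a pure function of the seed and the applicant list, "an announced set of rules consistently applied in each case" becomes a property any auditor can verify by recomputation, rather than a claim taken on trust.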

Keywords: algorithms, governance, accountability, internet, cyber, technology, bias, discrimination, computational methods, law, big data, computer science, code

JEL Classification: C6, K00, K3, K4, K1

Suggested Citation

Kroll, Joshua A. and Huey, Joanna and Barocas, Solon and Felten, Edward W. and Reidenberg, Joel R. and Robinson, David G. and Yu, Harlan, Accountable Algorithms (March 2, 2016). University of Pennsylvania Law Review, Vol. 165, 2017 Forthcoming; Fordham Law Legal Studies Research Paper No. 2765268. Available at SSRN: https://ssrn.com/abstract=2765268

