Algorithmic Transparency and Decision-Making Accountability: Thoughts for Buying Machine Learning Algorithms

Jake Goldenfein, 'Algorithmic Transparency and Decision-Making Accountability: Thoughts for buying machine learning algorithms' in Office of the Victorian Information Commissioner (ed), Closer to the Machine: Technical, Social, and Legal aspects of AI (2019)

25 Pages Posted: 9 Sep 2019


Jake Goldenfein

Melbourne Law School - University of Melbourne

Date Written: August 31, 2019

Abstract

There has been a great deal of research on how to achieve algorithmic accountability and transparency in automated decision-making systems - especially those used in public governance. However, good accountability in the implementation and use of automated decision-making systems is far from simple. It involves multiple overlapping institutional, technical, and political considerations, and becomes all the more complex in the context of machine learning based, rather than rule based, decision systems. This chapter argues that relying on human oversight of automated systems - so-called 'human-in-the-loop' approaches - is entirely deficient, and suggests that addressing transparency and accountability during the procurement phase of machine learning systems - during their specification and parameterisation - is critical. In a machine learning based automated decision system, the accountability typically associated with a public official making a decision has already been displaced into the actions and decisions of those creating the system - the bureaucrats and engineers involved in building the relevant models, curating the datasets, and implementing the system institutionally. But what should those system designers be thinking about and asking for when specifying those systems?

There are many accountability mechanisms available for system designers to consider, including new computational transparency mechanisms, 'fairness' and non-discrimination, and 'explainability' of decisions. If an official specifies that a system be transparent, fair, or explainable, however, it is important that they understand the limitations of such a specification in the context of machine learning. Each of these approaches is fraught with risks and limitations, and complicated by the challenging political economy of technology platforms in government. Without an understanding of the complexities and limitations of those accountability and transparency ideas, such specifications risk disempowering public officials in the face of private industry technology vendors, who use trade secrets and market power in deeply problematic ways, as well as producing deficient accountability outcomes. This chapter therefore outlines the risks associated with corporate co-option of those transparency and accountability mechanisms, and suggests that significant resources must be invested in developing the skills needed in the public sector to decide whether a machine learning system is useful and desirable, and how it might be made as accountable and transparent as possible.

Keywords: algorithmic accountability, machine learning, procurement, automated decision-making

Suggested Citation

Goldenfein, Jake, Algorithmic Transparency and Decision-Making Accountability: Thoughts for Buying Machine Learning Algorithms (August 31, 2019). Jake Goldenfein, 'Algorithmic Transparency and Decision-Making Accountability: Thoughts for buying machine learning algorithms' in Office of the Victorian Information Commissioner (ed), Closer to the Machine: Technical, Social, and Legal aspects of AI (2019), Available at SSRN: https://ssrn.com/abstract=3445873

Jake Goldenfein (Contact Author)

Melbourne Law School - University of Melbourne

185 Pelham Street
Melbourne, VIC 3010
Australia
