Regulating Artificial Intelligence in Finance: Putting the Human in the Loop
43 Sydney Law Review 43 (2021)
41 pages. Posted: 22 Apr 2021. Last revised: 26 Oct 2023.
Date Written: April 1, 2021
This article develops a framework for understanding and addressing the increasing role of artificial intelligence (‘AI’) in finance. It focuses on human responsibility as central to addressing the AI ‘black box’ problem: the risk that an AI produces undesirable results which go unrecognised or unanticipated, whether because people struggle to understand the AI’s internal workings or because the AI operates independently, outside human supervision or involvement. After mapping the various use cases of AI in finance and explaining its rapid development, we highlight the range of potential issues and regulatory challenges concerning financial services AI and the tools available to address them. We argue that the most effective regulatory approaches to the role of AI in finance bring humans into the loop through personal responsibility regimes, thus eliminating the black box argument as a defence to responsibility and legal liability for AI operations and decisions.
Keywords: artificial intelligence; machine learning; financial regulation; black box