Managing Corporations’ Risk in Adopting Artificial Intelligence: A Corporate Responsibility Paradigm
(2021) 19 Washington University Global Studies Law Review (forthcoming)
33 Pages · Posted: 25 Feb 2021
Date Written: September 1, 2020
Machine learning (ML) raises issues of risk for corporate and commercial use that are distinct from the legal risks involved in deploying robots, which may be more deterministic in nature. These risks concern what data is fed into ML's learning processes, including the dangers of bias and hidden, sub-optimal assumptions; how such data is processed by ML to reach its 'outcome,' sometimes producing perverse results such as unexpected errors, harm, difficult choices, and even sub-optimal behavioural phenomena; and who should be accountable for such risks. While the extant literature discusses these issues richly, only emerging regulatory frameworks and soft law, in the form of ethical principles, are available to guide corporations navigating this area of innovation.
This article focuses on corporations that deploy ML, rather than on producers of ML innovations, in order to chart a framework for guiding strategic corporate decisions in adopting ML. We argue that such a framework necessarily integrates corporations' legal risks with their broader accountability to society. Corporations do not navigate ML innovations within a settled 'compliance landscape,' given that the laws and regulations governing corporate use of ML are still emerging. Corporations' deployment of ML is being scrutinised by industry, stakeholders, and broader society as governance initiatives develop from a number of bottom-up quarters. We argue that corporations should frame their strategic deployment of ML innovations within a 'thick and broad' paradigm of corporate responsibility that is inextricably connected to business-society relations.
Keywords: machine learning, artificial intelligence, regulation, responsibility