A Disclosure-Based Approach to Regulating AI in Corporate Governance
22 Pages
Posted: 10 Jan 2022
Date Written: January 7, 2022
Abstract
The use of technology, including artificial intelligence (AI), in corporate governance has been expanding. Corporations have begun to use AI systems for various governance functions, such as effecting board appointments, enabling board monitoring by processing large amounts of data, and even facilitating whistleblowing, all of which address the agency problems present in modern corporations. On the other hand, the use of AI in corporate governance also presents significant risks. These include privacy and security issues, the 'black box problem' or the lack of transparency in AI decision-making, and the undue power conferred on those who control decisions regarding the deployment of specific AI technologies.
In this paper, we explore the possibility of deploying a disclosure-based approach as a regulatory tool to address the risks emanating from the use of AI in corporate governance. Specifically, we examine whether existing securities laws mandate corporate boards to disclose whether they rely on AI in their decision-making processes. Not only could such disclosure obligations ensure adequate transparency for the various corporate constituents, but they may also incentivize boards to pay sufficient regard to the limitations and risks of AI in corporate governance. At the same time, such a requirement would not constrain companies from experimenting with the potential uses of AI in corporate governance. Normatively, and given the likelihood of greater use of AI in corporate governance moving forward, we also explore the merits of devising a specific disclosure regime targeting the intersection of AI and corporate governance.
Keywords: Artificial intelligence, corporate governance, board of directors, disclosure, regulation