Artificial Intelligence: The Very Human Dangers of Dysfunctional Design and Autocratic Corporate Governance
50 Pages Posted: 6 May 2019
Date Written: May 3, 2019
Abstract
This article addresses the widespread misunderstanding and misapplication of Artificial Intelligence (AI) decision-making technology and proposes a regulatory model that places public rather than private interest at the heart of AI regulation. It argues that a human rather than a technological lens is needed to cut through much of the confusion surrounding AI. Examining the nature of AI decision-making with a focus on its human design and impact, the article concludes that problematic outcomes in this area stem not from the technology itself but from flawed human design and implementation. It then broadens this human-focused lens to private sector tech governance, extending the argument that AI presents a human rather than a technological problem; an examination of the ownership and control of key AI tech companies finds that autocratic models of corporate governance abound. The article concludes that, despite the unfortunate deregulatory instincts of the US and UK governments with regard to technology, AI should be treated in a manner similar to pharmaceutical products, with public interest regulation administered by a state regulatory body, and that changes to the corporate governance regulation of tech companies are also necessary.