Governance, Risk, and Artificial Intelligence
Mannes, A. (2020). Governance, Risk, and Artificial Intelligence. AI Magazine, 41(1), 61-69. DOI:10.1609/aimag.v41i1.5200
15 Pages Posted: 17 Jun 2020
Date Written: May 22, 2020
Abstract
Artificial intelligence, whether embodied in robots and the Internet of Things or disembodied as intelligent agents and decision-support systems, can enrich the human experience. It will also fail and cause harms, including physical injury and financial loss, as well as subtler harms such as instantiating human bias or undermining individual dignity. These failures could have a disproportionate impact because strange, new, and unpredictable dangers may provoke public discomfort and rejection of artificial intelligence. Two possible approaches to mitigating these risks are the hard power of regulating artificial intelligence to ensure its safety, and the soft power of risk communication, which engages the public and builds trust. These approaches are complementary, and both should be implemented as artificial intelligence becomes increasingly prevalent in daily life.
Keywords: artificial intelligence, risk, governance, technology policy