Governance, Risk, and Artificial Intelligence

Mannes, A. (2020). Governance, Risk, and Artificial Intelligence. AI Magazine, 41(1), 61-69. DOI:10.1609/aimag.v41i1.5200

15 pages. Posted: 17 Jun 2020

Aaron Mannes

American Association for the Advancement of Science; University of Maryland - School of Public Policy

Date Written: May 22, 2020

Abstract

Artificial intelligence, whether embodied in robots or Internet of Things devices, or disembodied in intelligent agents or decision-support systems, can enrich the human experience. It will also fail and cause harms, including physical injury and financial loss, as well as subtler harms such as instantiating human bias or undermining individual dignity. These failures could have a disproportionate impact because strange, new, and unpredictable dangers may lead to public discomfort and rejection of artificial intelligence. Two possible approaches to mitigating these risks are the hard power of regulating artificial intelligence, to ensure it is safe, and the soft power of risk communication, which engages the public and builds trust. These approaches are complementary, and both should be implemented as artificial intelligence becomes increasingly prevalent in daily life.

Keywords: artificial intelligence, risk, governance, technology policy

Suggested Citation

Mannes, Aaron, Governance, Risk, and Artificial Intelligence (May 22, 2020). AI Magazine, 41(1), 61-69. DOI:10.1609/aimag.v41i1.5200. Available at SSRN: https://ssrn.com/abstract=3608328

Aaron Mannes (Contact Author)

University of Maryland - School of Public Policy ( email )

2101 Van Munching Hall
College Park, MD 20742
United States

American Association for the Advancement of Science ( email )

Washington, DC 20005
United States
