Effective Mitigations for Systemic Risks from General-Purpose AI

78 Pages · Posted: 8 Jan 2025

Risto Uuk

Future of Life Institute; KU Leuven

Annemieke Brouwer

Future of Life Institute

Tim Schreier

Future of Life Institute

Noemi Dreksler

Centre for the Governance of AI

Valeria Pulignano

KU Leuven - Centre for Sociological Research

Rishi Bommasani

Stanford University

Date Written: November 14, 2024

Abstract

The systemic risks posed by general-purpose AI models are a growing concern, yet the effectiveness of mitigations remains underexplored. Previous research has proposed frameworks for risk mitigation but has left gaps in our understanding of how effective these measures are perceived to be. Our study addresses this gap by evaluating how experts perceive different mitigations that aim to reduce the systemic risks of general-purpose AI models. We surveyed 76 experts whose expertise spans AI safety; critical infrastructure; democratic processes; chemical, biological, radiological, and nuclear (CBRN) risks; and discrimination and bias. Among 27 mitigations identified through a literature review, we find that domain experts perceive a broad range of risk mitigation measures as both effective in reducing various systemic risks and technically feasible. Three mitigation measures stand out in particular: safety incident reports and security information sharing, third-party pre-deployment model audits, and pre-deployment risk assessments. These measures receive the highest expert agreement ratings (>60%) across all four risk areas and are the most frequently selected in experts’ preferred combinations of measures (>40%). The surveyed experts highlighted external scrutiny, proactive evaluation, and transparency as key principles for the effective mitigation of systemic risks. We provide policy recommendations for implementing the most promising measures, incorporating the experts’ qualitative contributions. These insights should inform regulatory frameworks and industry practices for mitigating the systemic risks associated with general-purpose AI.

Keywords: Systemic risk mitigation, general-purpose AI, expert survey, risk management

JEL Classification: O33, O38, D81, K20, D62

Suggested Citation

Uuk, Risto and Brouwer, Annemieke and Schreier, Tim and Dreksler, Noemi and Pulignano, Valeria and Bommasani, Rishi, Effective Mitigations for Systemic Risks from General-Purpose AI (November 14, 2024). Available at SSRN: https://ssrn.com/abstract=5021463 or http://dx.doi.org/10.2139/ssrn.5021463

Risto Uuk (Contact Author)

Future of Life Institute

KU Leuven

Oude Markt 13
Leuven, Vlaams-Brabant 3000
Belgium

Annemieke Brouwer

Future of Life Institute

United States

Tim Schreier

Future of Life Institute

Noemi Dreksler

Centre for the Governance of AI

United Kingdom

Valeria Pulignano

KU Leuven - Centre for Sociological Research

Oude Markt 13, 3000
Leuven
Belgium

Rishi Bommasani

Stanford University

367 Panama St
Stanford, CA 94305
United States
