Survey Assessing Risks from Artificial Intelligence (Technical Report)

27 Pages. Posted: 8 Apr 2024. Last revised: 9 Jul 2024.

Alexander Saeri

University of Queensland; Ready Research

Michael Noetel

University of Queensland - School of Psychology

Jessica Graham

University of Queensland

Date Written: March 7, 2024

Abstract

The development and use of AI technologies are accelerating. Across 2022 and 2023, new large-scale models were announced monthly and achieved increasingly complex and general tasks; this trend continues in 2024 with Google DeepMind's Gemini, OpenAI's Sora, and others.

Experts in AI forecast that the development of powerful AI models could lead to radical changes in wealth, health, and power on a scale comparable to the nuclear and industrial revolutions.

Addressing the risks and harms from these changes requires effective AI governance: forming robust norms, policies, laws, processes and institutions to guide good decision-making about AI development, deployment and use. Effective governance is especially crucial for managing extreme or catastrophic risks from AI that are high impact and uncertain, such as harm from misuse, accident or loss of control.

Understanding public beliefs and expectations about AI risks and their possible responses is important for ensuring that the ethical, legal, and social implications of AI are addressed through effective governance.

We conducted the Survey Assessing Risks from AI (SARA) to generate ‘evidence for action’, to help public and private actors make the decisions needed for safer AI development and use.

This survey investigates public concerns about 14 risks from AI, ranging from the use of AI to spread fake and harmful content online to its use in creating biological and chemical weapons; public support for AI development and regulation; and priority governance actions to address risks from AI, with a focus on government action.

Keywords: artificial intelligence, ai governance, public opinion research, ai policy

Suggested Citation

Saeri, Alexander and Noetel, Michael and Graham, Jessica, Survey Assessing Risks from Artificial Intelligence (Technical Report) (March 7, 2024). Available at SSRN: https://ssrn.com/abstract=4750953 or http://dx.doi.org/10.2139/ssrn.4750953

Alexander Saeri (Contact Author)

University of Queensland

St Lucia
Brisbane, Queensland 4072
Australia

Ready Research

Melbourne
Australia

Michael Noetel

University of Queensland - School of Psychology

Jessica Graham

University of Queensland

St Lucia
Brisbane, Queensland 4072
Australia
