Military Artificial Intelligence as Contributor to Global Catastrophic Risk

The Era of Global Risk, eds. SJ Beard, Martin Rees, Catherine Richards & Clarissa Rios-Rojas (Open Book Publishers, 2023).

36 pages. Posted: 24 May 2022. Last revised: 22 Dec 2022.

Matthijs M. Maas

Institute for Law & AI; Centre for the Study of Existential Risk, University of Cambridge; University of Cambridge - Leverhulme Centre for the Future of Intelligence

Kayla Matteucci

Centre for the Study of Existential Risk

Di Cooke

Centre for the Study of Existential Risk; University of Cambridge - Leverhulme Centre for the Future of Intelligence; Centre for the Governance of AI

Date Written: May 22, 2022

Abstract

Recent years have seen growing attention to the rapidly advancing use of AI technologies in warfare. This chapter explores the ways in which such military AI technologies might contribute to Global Catastrophic Risks (GCR). After reviewing the GCR field’s limited previous engagement with military AI and surveying recent advances in military AI, the chapter focuses on two proposed risk scenarios. First, we discuss arguments around the use of swarms of Lethal Autonomous Weapons Systems and suggest that, while these systems are concerning, they do not yet appear likely to constitute a GCR in the near term: current and anticipated production limits and costs leave them uncompetitive with existing systems for mass destruction. Second, we examine the intersection of military AI and nuclear weapons, which we argue carries significantly higher GCR potential. We review historical debates over when, where, and why nuclear weapons could lead to GCR, along with recent geopolitical developments that could raise these risks further. We then outline six ways in which the use of AI systems in, around, or against nuclear weapons and their command infrastructures could increase the likelihood of nuclear escalation and global catastrophe. The chapter concludes with suggestions for a research agenda aimed at a more comprehensive and multidisciplinary understanding of the potential risks from military AI, both today and in the future.

Keywords: Artificial intelligence, Military AI, autonomous weapons, global catastrophic risk, nuclear weapons, nuclear war

Suggested Citation

Maas, Matthijs M., Matteucci, Kayla, and Cooke, Di, Military Artificial Intelligence as Contributor to Global Catastrophic Risk (May 22, 2022), in The Era of Global Risk, eds. SJ Beard, Martin Rees, Catherine Richards & Clarissa Rios-Rojas (Open Book Publishers, 2023). Available at SSRN: https://ssrn.com/abstract=4115010 or http://dx.doi.org/10.2139/ssrn.4115010

Matthijs M. Maas (Contact Author)

Institute for Law & AI

Cambridge, MA
United States

Centre for the Study of Existential Risk, University of Cambridge

Trinity Ln
Cambridge, CB2 1TN
United Kingdom

University of Cambridge - Leverhulme Centre for the Future of Intelligence

United Kingdom

Kayla Matteucci

Centre for the Study of Existential Risk

Di Cooke

Centre for the Study of Existential Risk

University of Cambridge - Leverhulme Centre for the Future of Intelligence

United Kingdom

Centre for the Governance of AI

United Kingdom
