Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives

52 Pages · Posted: 10 Oct 2023 · Last revised: 25 Oct 2023

Elizabeth Seger

Centre for the Governance of AI

Noemi Dreksler

Centre for the Governance of AI

Richard Moulange

Centre for the Governance of AI; University of Cambridge - MRC Biostatistics Unit

Emily Dardaman

Independent

Jonas Schuett

Centre for the Governance of AI; Goethe University Frankfurt; Institute for Law & AI

K. Wei

Centre for the Governance of AI; Harvard University - Harvard Law School

Christoph Winter

University of Cambridge - Faculty of Law; Harvard University; Institute for Law & AI

Mackenzie Arnold

Institute for Law & AI

Seán Ó hÉigeartaigh

University of Cambridge - Leverhulme Centre for the Future of Intelligence

Anton Korinek

University of Virginia; National Bureau of Economic Research (NBER)

Markus Anderljung

Centre for the Governance of AI

Ben Bucknall

Independent

Alan Chan

University of Montreal

Eoghan Stafford

Centre for the Governance of AI

Leonie Koessler

Centre for the Governance of AI

Aviv Ovadya

Thoughtful Technology Project

Ben Garfinkel

Centre for the Governance of AI

Emma Bluemke

Centre for the Governance of AI

Michael Aird

Institute for AI Policy & Strategy

Patrick Levermore

Institute for AI Policy & Strategy

Julian Hazell

University of Oxford - Oxford Internet Institute

Abhishek Gupta

Montreal AI Ethics Institute

Date Written: October 9, 2023

Abstract

Recent decisions by leading AI labs to open-source their models or to restrict access to them have sparked debate about whether, and how, increasingly capable AI models should be shared. In AI, open-sourcing typically refers to making a model's architecture and weights freely and publicly accessible for anyone to modify, study, build on, and use. This offers advantages such as enabling external oversight, accelerating progress, and decentralizing control over AI development and use, but it also presents a growing potential for misuse and unintended consequences. This paper examines the risks and benefits of open-sourcing highly capable foundation models. While open-sourcing has historically provided substantial net benefits for most software and AI development processes, we argue that for some highly capable foundation models likely to be developed in the near future, open-sourcing may pose sufficiently extreme risks to outweigh the benefits. In such cases, highly capable foundation models should not be open-sourced, at least not initially. Alternative strategies, including non-open-source model-sharing options, are explored. The paper concludes with recommendations for developers, standard-setting bodies, and governments on establishing safe and responsible model-sharing practices and preserving open-source benefits where safe.

Suggested Citation

Seger, Elizabeth and Dreksler, Noemi and Moulange, Richard and Dardaman, Emily and Schuett, Jonas and Wei, K. and Winter, Christoph and Arnold, Mackenzie and Ó hÉigeartaigh, Seán and Korinek, Anton and Anderljung, Markus and Bucknall, Ben and Chan, Alan and Stafford, Eoghan and Koessler, Leonie and Ovadya, Aviv and Garfinkel, Ben and Bluemke, Emma and Aird, Michael and Levermore, Patrick and Hazell, Julian and Gupta, Abhishek, Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives (October 9, 2023). LawAI Working Paper No. 2-2023, Available at SSRN: https://ssrn.com/abstract=4596436 or http://dx.doi.org/10.2139/ssrn.4596436

Elizabeth Seger (Contact Author)

Centre for the Governance of AI
Oxford
United Kingdom
