Weapons of Mass Disruption: Artificial Intelligence and International Law
Cambridge International Law Journal 10 (2021), 181–203
31 Pages · Posted: 26 Apr 2021 · Last revised: 13 Jan 2022
Date Written: December 1, 2021
Abstract
The answers each political community finds to the law reform questions posed by artificial intelligence (AI) may differ, but a near-term threat is that AI systems capable of causing harm will not be confined to one jurisdiction; indeed, it may be impossible to link them to a specific jurisdiction at all. This is not a new problem in cybersecurity, though differing national approaches will pose barriers to effective regulation, exacerbated by the speed, autonomy, and opacity of AI systems. For that reason, some measure of collective action is needed. Lessons may be learned from efforts to regulate the global commons, as well as moves to outlaw certain products (weapons and drugs, for example) and activities (such as slavery and child sex tourism). The argument advanced here is that regulation, in the sense of public control, requires the active involvement of states. To coordinate those activities and enforce global ‘red lines’, this paper posits a hypothetical International Artificial Intelligence Agency (IAIA), modelled on the agency created after the Second World War to promote peaceful uses of nuclear energy, while deterring or containing its weaponization and other harmful effects.
Keywords: artificial intelligence, international law, international organizations, international institutions, nuclear energy, regulation
JEL Classification: K23, K24, K33, K38, O32, O38