Models for Classifying AI Systems: The Switch, the Ladder, and the Matrix

31 Pages
Posted: 30 Jun 2022

Jakob Mökander

University of Oxford - Oxford Internet Institute; Princeton University - Center for Information Technology Policy

Margi Sheth

AstraZeneca

David Watson

King's College London

Luciano Floridi

Yale University - Digital Ethics Center; University of Bologna - Department of Legal Studies

Date Written: Jan 14, 2022

Abstract

Organisations that design and deploy systems based on artificial intelligence (AI) increasingly commit themselves to high-level ethical principles. However, a gap remains between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI ethics is the lack of a well-defined material scope. Put differently, the question of which systems and processes AI ethics principles ought to apply to remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical challenges. Nevertheless, pragmatic problem-solving demands that things be sorted so that their grouping will promote successful actions for some specific end. In this article, we review and compare previous attempts to classify AI systems for the purpose of implementing AI governance in practice. We find that attempts to classify AI systems in the previous literature use one of three mental models: the Switch, i.e., a binary approach according to which systems either are or are not considered AI systems depending on their characteristics; the Ladder, i.e., a risk-based approach that classifies systems according to the ethical risks they pose; and the Matrix, i.e., a multi-dimensional classification that takes various aspects of a system into account, such as context, data input, and decision-model. Each of these models for classifying AI systems comes with its own set of strengths and weaknesses. By conceptualising different ways of classifying AI systems into simple mental models, we hope to provide organisations that design, deploy, or regulate AI systems with the conceptual tools needed to operationalise AI governance in practice.
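To make the three mental models concrete, the Python sketch below shows, in a minimal and purely illustrative way, how an organisation might encode each approach: a binary Switch, a risk-tiered Ladder, and a Matrix profile over context, data input, and decision-model. The class, the thresholds, and the dimension values are assumptions for illustration only; they are not taken from the paper or from any regulatory text.

from dataclasses import dataclass

@dataclass
class System:
    uses_machine_learning: bool   # Switch: characteristic that puts a system in scope
    risk_score: float             # Ladder: estimated ethical risk, 0.0 (none) to 1.0 (severe)
    context: str                  # Matrix dimension: domain of deployment, e.g. "healthcare"
    data_input: str               # Matrix dimension: e.g. "personal" or "non-personal"
    decision_model: str           # Matrix dimension: e.g. "rule-based" or "learned"

def switch(system: System) -> bool:
    """Binary model: a system either is or is not an AI system in scope for governance."""
    return system.uses_machine_learning

def ladder(system: System) -> str:
    """Risk-based model: systems are sorted into tiers by the ethical risk they pose."""
    if system.risk_score >= 0.75:
        return "unacceptable risk"
    if system.risk_score >= 0.5:
        return "high risk"
    if system.risk_score >= 0.25:
        return "limited risk"
    return "minimal risk"

def matrix(system: System) -> tuple:
    """Multi-dimensional model: classification is a profile across several aspects."""
    return (system.context, system.data_input, system.decision_model)

# The same system is classified differently under each mental model.
s = System(True, 0.6, "healthcare", "personal", "learned")
print(switch(s))   # True -> in scope
print(ladder(s))   # "high risk"
print(matrix(s))   # ("healthcare", "personal", "learned")

Under these assumptions, the Switch yields a yes/no answer, the Ladder a tier that can be mapped to proportionate governance requirements, and the Matrix a profile that supports more fine-grained but also more complex decisions.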

Keywords: Artificial Intelligence, AI Systems, Ethics, Classification, Governance, Material Scope, Mental Models

Suggested Citation

Mökander, Jakob and Sheth, Margi and Watson, David and Floridi, Luciano, Models for Classifying AI Systems: The Switch, the Ladder, and the Matrix (Jan 14, 2022). Available at SSRN: https://ssrn.com/abstract=4141677 or http://dx.doi.org/10.2139/ssrn.4141677

Jakob Mökander (Contact Author)

University of Oxford - Oxford Internet Institute ( email )

Princeton University - Center for Information Technology Policy ( email )

C231A E-Quad
Olden Street
Princeton, NJ 08540
United States

Margi Sheth

AstraZeneca ( email )

Gaithersburg, MD
United States

David Watson

King's College London ( email )

Strand
London, England WC2R 2LS
United Kingdom

Luciano Floridi

Yale University - Digital Ethics Center ( email )

85 Trumbull Street
New Haven, CT 06511
United States
203-432-6473 (Phone)

University of Bologna - Department of Legal Studies ( email )

Via Zamboni 22
Bologna, Bo 40100
Italy

HOME PAGE: http://www.unibo.it/sitoweb/luciano.floridi/en
